00:00:00.000 Started by upstream project "autotest-per-patch" build number 122846
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.047 The recommended git tool is: git
00:00:00.047 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.074 Fetching changes from the remote Git repository
00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.131 Using shallow fetch with depth 1
00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.131 > git --version # timeout=10
00:00:00.173 > git --version # 'git version 2.39.2'
00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.174 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.174 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.535 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.546 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.557 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:03.557 > git config core.sparsecheckout # timeout=10
00:00:03.569 > git read-tree -mu HEAD # timeout=10
00:00:03.585 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:03.603 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:03.604 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:03.697 [Pipeline] Start of Pipeline
00:00:03.708 [Pipeline] library
00:00:03.709 Loading library shm_lib@master
00:00:03.710 Library shm_lib@master is cached. Copying from home.
00:00:03.731 [Pipeline] node
00:00:03.738 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.739 [Pipeline] {
00:00:03.750 [Pipeline] catchError
00:00:03.751 [Pipeline] {
00:00:03.764 [Pipeline] wrap
00:00:03.773 [Pipeline] {
00:00:03.780 [Pipeline] stage
00:00:03.781 [Pipeline] { (Prologue)
00:00:03.956 [Pipeline] sh
00:00:04.239 + logger -p user.info -t JENKINS-CI
00:00:04.252 [Pipeline] echo
00:00:04.253 Node: GP11
00:00:04.261 [Pipeline] sh
00:00:04.564 [Pipeline] setCustomBuildProperty
00:00:04.576 [Pipeline] echo
00:00:04.578 Cleanup processes
00:00:04.583 [Pipeline] sh
00:00:04.866 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.866 3154617 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.879 [Pipeline] sh
00:00:05.162 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.162 ++ grep -v 'sudo pgrep'
00:00:05.162 ++ awk '{print $1}'
00:00:05.162 + sudo kill -9
00:00:05.162 + true
00:00:05.177 [Pipeline] cleanWs
00:00:05.186 [WS-CLEANUP] Deleting project workspace...
00:00:05.186 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.192 [WS-CLEANUP] done
00:00:05.197 [Pipeline] setCustomBuildProperty
00:00:05.217 [Pipeline] sh
00:00:05.504 + sudo git config --global --replace-all safe.directory '*'
00:00:05.610 [Pipeline] nodesByLabel
00:00:05.611 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.618 [Pipeline] httpRequest
00:00:05.622 HttpMethod: GET
00:00:05.622 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.627 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.630 Response Code: HTTP/1.1 200 OK
00:00:05.631 Success: Status code 200 is in the accepted range: 200,404
00:00:05.631 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.499 [Pipeline] sh
00:00:06.779 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.794 [Pipeline] httpRequest
00:00:06.798 HttpMethod: GET
00:00:06.798 URL: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz
00:00:06.799 Sending request to url: http://10.211.164.101/packages/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz
00:00:06.802 Response Code: HTTP/1.1 200 OK
00:00:06.803 Success: Status code 200 is in the accepted range: 200,404
00:00:06.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz
00:00:24.570 [Pipeline] sh
00:00:24.847 + tar --no-same-owner -xf spdk_2dc74a001856d1e04b15939137e0bb63d27e8571.tar.gz
00:00:27.383 [Pipeline] sh
00:00:27.664 + git -C spdk log --oneline -n5
00:00:27.665 2dc74a001 raid: free base bdev earlier during removal
00:00:27.665 6518a98df raid: remove base_bdev_lock
00:00:27.665 96aff3c95 raid: fix some issues in raid_bdev_write_config_json()
00:00:27.665 f9cccaa84 raid: examine other bdevs when starting from superblock
00:00:27.665 688de1b9f raid: factor out a function to get a raid bdev by uuid
00:00:27.676 [Pipeline] }
00:00:27.693 [Pipeline] // stage
00:00:27.703 [Pipeline] stage
00:00:27.705 [Pipeline] { (Prepare)
00:00:27.743 [Pipeline] writeFile
00:00:27.787 [Pipeline] sh
00:00:28.060 + logger -p user.info -t JENKINS-CI
00:00:28.072 [Pipeline] sh
00:00:28.382 + logger -p user.info -t JENKINS-CI
00:00:28.393 [Pipeline] sh
00:00:28.669 + cat autorun-spdk.conf
00:00:28.669 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.669 SPDK_TEST_NVMF=1
00:00:28.669 SPDK_TEST_NVME_CLI=1
00:00:28.669 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.669 SPDK_TEST_NVMF_NICS=e810
00:00:28.669 SPDK_TEST_VFIOUSER=1
00:00:28.669 SPDK_RUN_UBSAN=1
00:00:28.669 NET_TYPE=phy
00:00:28.676 RUN_NIGHTLY=0
00:00:28.679 [Pipeline] readFile
00:00:28.700 [Pipeline] withEnv
00:00:28.702 [Pipeline] {
00:00:28.713 [Pipeline] sh
00:00:28.990 + set -ex
00:00:28.990 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:28.990 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:28.990 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.990 ++ SPDK_TEST_NVMF=1
00:00:28.990 ++ SPDK_TEST_NVME_CLI=1
00:00:28.990 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.990 ++ SPDK_TEST_NVMF_NICS=e810
00:00:28.990 ++ SPDK_TEST_VFIOUSER=1
00:00:28.990 ++ SPDK_RUN_UBSAN=1
00:00:28.990 ++ NET_TYPE=phy
00:00:28.990 ++ RUN_NIGHTLY=0
00:00:28.990 + case $SPDK_TEST_NVMF_NICS in
00:00:28.990 + DRIVERS=ice
00:00:28.990 + [[ tcp == \r\d\m\a ]]
00:00:28.990 + [[ -n ice ]]
00:00:28.990 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:28.990 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:28.990 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:28.990 rmmod: ERROR: Module irdma is not currently loaded
00:00:28.990 rmmod: ERROR: Module i40iw is not currently loaded
00:00:28.990 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:28.990 + true
00:00:28.990 + for D in $DRIVERS
00:00:28.990 + sudo modprobe ice
00:00:28.990 + exit 0
00:00:29.001 [Pipeline] }
00:00:29.023 [Pipeline] // withEnv
00:00:29.028 [Pipeline] }
00:00:29.044 [Pipeline] // stage
00:00:29.054 [Pipeline] catchError
00:00:29.056 [Pipeline] {
00:00:29.070 [Pipeline] timeout
00:00:29.070 Timeout set to expire in 40 min
00:00:29.072 [Pipeline] {
00:00:29.086 [Pipeline] stage
00:00:29.088 [Pipeline] { (Tests)
00:00:29.102 [Pipeline] sh
00:00:29.378 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.378 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.378 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.378 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:29.378 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:29.378 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:29.378 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:29.378 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:29.378 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:29.378 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:29.378 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.378 + source /etc/os-release
00:00:29.378 ++ NAME='Fedora Linux'
00:00:29.378 ++ VERSION='38 (Cloud Edition)'
00:00:29.378 ++ ID=fedora
00:00:29.378 ++ VERSION_ID=38
00:00:29.378 ++ VERSION_CODENAME=
00:00:29.378 ++ PLATFORM_ID=platform:f38
00:00:29.378 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.378 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.378 ++ LOGO=fedora-logo-icon
00:00:29.378 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.378 ++ HOME_URL=https://fedoraproject.org/
00:00:29.378 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.378 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.378 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.378 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.378 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.378 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.378 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.378 ++ SUPPORT_END=2024-05-14
00:00:29.378 ++ VARIANT='Cloud Edition'
00:00:29.378 ++ VARIANT_ID=cloud
00:00:29.378 + uname -a
00:00:29.378 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.378 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:30.753 Hugepages
00:00:30.753 node hugesize free / total
00:00:30.753 node0 1048576kB 0 / 0
00:00:30.753 node0 2048kB 0 / 0
00:00:30.753 node1 1048576kB 0 / 0
00:00:30.753 node1 2048kB 0 / 0
00:00:30.753
00:00:30.753 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:30.753 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:30.753 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:30.753 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:30.753 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:30.753 + rm -f /tmp/spdk-ld-path
00:00:30.753 + source autorun-spdk.conf
00:00:30.753 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.753 ++ SPDK_TEST_NVMF=1
00:00:30.753 ++ SPDK_TEST_NVME_CLI=1
00:00:30.753 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.753 ++ SPDK_TEST_NVMF_NICS=e810
00:00:30.753 ++ SPDK_TEST_VFIOUSER=1
00:00:30.753 ++ SPDK_RUN_UBSAN=1
00:00:30.753 ++ NET_TYPE=phy
00:00:30.753 ++ RUN_NIGHTLY=0
00:00:30.753 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:30.753 + [[ -n '' ]]
00:00:30.753 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:30.753 + for M in /var/spdk/build-*-manifest.txt
00:00:30.753 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:30.753 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:30.753 + for M in /var/spdk/build-*-manifest.txt
00:00:30.753 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:30.753 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:30.753 ++ uname
00:00:30.753 + [[ Linux == \L\i\n\u\x ]]
00:00:30.753 + sudo dmesg -T
00:00:30.753 + sudo dmesg --clear
00:00:30.753 + dmesg_pid=3155379
00:00:30.753 + [[ Fedora Linux == FreeBSD ]]
00:00:30.753 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:30.753 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:30.753 + sudo dmesg -Tw
00:00:30.753 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:30.753 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:30.753 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:30.753 + [[ -x /usr/src/fio-static/fio ]]
00:00:30.753 + export FIO_BIN=/usr/src/fio-static/fio
00:00:30.753 + FIO_BIN=/usr/src/fio-static/fio
00:00:30.753 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:30.753 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:30.753 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:30.753 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:30.753 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:30.753 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:30.753 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:30.753 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:30.753 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:30.753 Test configuration: 00:00:30.753 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.753 SPDK_TEST_NVMF=1 00:00:30.753 SPDK_TEST_NVME_CLI=1 00:00:30.753 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.753 SPDK_TEST_NVMF_NICS=e810 00:00:30.753 SPDK_TEST_VFIOUSER=1 00:00:30.753 SPDK_RUN_UBSAN=1 00:00:30.753 NET_TYPE=phy 00:00:30.753 RUN_NIGHTLY=0 04:01:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:30.753 04:01:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:30.753 04:01:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:30.753 04:01:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:30.753 04:01:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.753 04:01:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.753 04:01:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.753 04:01:18 -- paths/export.sh@5 -- $ export PATH 00:00:30.753 04:01:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.753 04:01:18 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:30.753 04:01:18 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:30.753 04:01:18 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715738478.XXXXXX 00:00:30.753 04:01:18 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715738478.wwwGX7 00:00:30.753 04:01:18 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:30.753 04:01:18 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:30.753 04:01:18 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:30.753 04:01:18 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:30.753 04:01:18 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:30.753 04:01:18 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:30.753 04:01:18 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:30.753 04:01:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:30.753 04:01:18 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:30.753 04:01:18 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:30.753 04:01:18 -- pm/common@17 -- $ local monitor 00:00:30.753 04:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.753 04:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.753 04:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.753 04:01:18 -- pm/common@21 -- $ date +%s 00:00:30.753 04:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.753 04:01:18 -- pm/common@21 -- $ date +%s 00:00:30.753 04:01:18 -- pm/common@25 -- $ sleep 1 00:00:30.753 04:01:18 -- pm/common@21 -- $ date +%s 00:00:30.753 04:01:18 -- pm/common@21 -- $ date +%s 00:00:30.753 04:01:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715738478 00:00:30.753 04:01:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715738478 00:00:30.753 04:01:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715738478 00:00:30.753 04:01:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715738478 00:00:30.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715738478_collect-vmstat.pm.log 00:00:30.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715738478_collect-cpu-load.pm.log 00:00:30.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715738478_collect-cpu-temp.pm.log 00:00:30.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715738478_collect-bmc-pm.bmc.pm.log 00:00:31.687 04:01:19 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:31.687 04:01:19 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:31.687 04:01:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:31.687 04:01:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:31.687 04:01:19 -- spdk/autobuild.sh@16 -- $ date -u 00:00:31.687 Wed May 15 02:01:19 AM UTC 2024 00:00:31.687 04:01:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:31.687 v24.05-pre-653-g2dc74a001 00:00:31.687 04:01:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:31.687 04:01:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:31.687 04:01:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:31.687 04:01:19 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:31.687 04:01:19 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:31.687 04:01:19 -- common/autotest_common.sh@10 -- $ set +x 00:00:31.944 ************************************ 00:00:31.944 START TEST ubsan 00:00:31.944 ************************************ 00:00:31.944 04:01:19 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:31.944 using ubsan 00:00:31.944 00:00:31.944 real 0m0.000s 00:00:31.944 user 0m0.000s 00:00:31.944 sys 0m0.000s 00:00:31.944 04:01:19 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:31.944 04:01:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:31.944 ************************************ 00:00:31.944 END TEST ubsan 00:00:31.944 ************************************ 00:00:31.944 04:01:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:31.944 04:01:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:31.944 04:01:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:31.944 04:01:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:31.944 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:31.944 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:32.202 Using 'verbs' RDMA provider 00:00:42.735 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:52.708 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:52.708 Creating mk/config.mk...done. 00:00:52.708 Creating mk/cc.flags.mk...done. 00:00:52.708 Type 'make' to build. 00:00:52.708 04:01:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:00:52.708 04:01:40 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:52.708 04:01:40 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:52.708 04:01:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.708 ************************************ 00:00:52.708 START TEST make 00:00:52.708 ************************************ 00:00:52.708 04:01:40 make -- common/autotest_common.sh@1121 -- $ make -j48 00:00:52.708 make[1]: Nothing to be done for 'all'. 
00:00:54.114 The Meson build system 00:00:54.114 Version: 1.3.1 00:00:54.114 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:54.114 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:54.114 Build type: native build 00:00:54.115 Project name: libvfio-user 00:00:54.115 Project version: 0.0.1 00:00:54.115 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:54.115 C linker for the host machine: cc ld.bfd 2.39-16 00:00:54.115 Host machine cpu family: x86_64 00:00:54.115 Host machine cpu: x86_64 00:00:54.115 Run-time dependency threads found: YES 00:00:54.115 Library dl found: YES 00:00:54.115 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:54.115 Run-time dependency json-c found: YES 0.17 00:00:54.115 Run-time dependency cmocka found: YES 1.1.7 00:00:54.115 Program pytest-3 found: NO 00:00:54.115 Program flake8 found: NO 00:00:54.115 Program misspell-fixer found: NO 00:00:54.115 Program restructuredtext-lint found: NO 00:00:54.115 Program valgrind found: YES (/usr/bin/valgrind) 00:00:54.115 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:54.115 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:54.115 Compiler for C supports arguments -Wwrite-strings: YES 00:00:54.115 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:54.115 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:54.115 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:54.115 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:54.115 Build targets in project: 8 00:00:54.115 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:54.115 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:54.115 00:00:54.115 libvfio-user 0.0.1 00:00:54.115 00:00:54.115 User defined options 00:00:54.115 buildtype : debug 00:00:54.115 default_library: shared 00:00:54.115 libdir : /usr/local/lib 00:00:54.115 00:00:54.115 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:54.685 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:54.955 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:54.955 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:54.955 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:54.955 [4/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:54.955 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:54.955 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:54.955 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:55.218 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:55.218 [9/37] Compiling C object samples/server.p/server.c.o 00:00:55.218 [10/37] Compiling C object samples/null.p/null.c.o 00:00:55.218 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:55.218 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:55.218 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:55.218 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:55.218 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:55.218 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:55.218 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:55.218 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:55.218 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:55.218 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:55.218 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:55.218 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:55.218 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:55.218 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:55.218 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:55.218 [26/37] Compiling C object samples/client.p/client.c.o 00:00:55.218 [27/37] Linking target samples/client 00:00:55.218 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:55.479 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:00:55.479 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:55.479 [31/37] Linking target test/unit_tests 00:00:55.479 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:55.739 [33/37] Linking target samples/gpio-pci-idio-16 00:00:55.739 [34/37] Linking target samples/lspci 00:00:55.739 [35/37] Linking target samples/null 00:00:55.739 [36/37] Linking target samples/shadow_ioeventfd_server 00:00:55.739 [37/37] Linking target samples/server 00:00:55.739 INFO: autodetecting backend as ninja 00:00:55.739 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:55.739 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:56.681 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:56.681 ninja: no work to do. 00:01:01.952 The Meson build system 00:01:01.952 Version: 1.3.1 00:01:01.952 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:01.952 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:01.952 Build type: native build 00:01:01.952 Program cat found: YES (/usr/bin/cat) 00:01:01.952 Project name: DPDK 00:01:01.952 Project version: 23.11.0 00:01:01.952 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:01.952 C linker for the host machine: cc ld.bfd 2.39-16 00:01:01.952 Host machine cpu family: x86_64 00:01:01.952 Host machine cpu: x86_64 00:01:01.952 Message: ## Building in Developer Mode ## 00:01:01.952 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:01.952 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:01.952 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:01.952 Program python3 found: YES (/usr/bin/python3) 00:01:01.952 Program cat found: YES (/usr/bin/cat) 00:01:01.952 Compiler for C supports arguments -march=native: YES 00:01:01.952 Checking for size of "void *" : 8 00:01:01.952 Checking for size of "void *" : 8 (cached) 00:01:01.952 Library m found: YES 00:01:01.952 Library numa found: YES 00:01:01.952 Has header "numaif.h" : YES 00:01:01.952 Library fdt found: NO 00:01:01.952 Library execinfo found: NO 00:01:01.952 Has header "execinfo.h" : YES 00:01:01.952 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:01.952 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:01.952 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:01.952 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:01.952 Run-time dependency openssl found: YES 3.0.9 00:01:01.952 Run-time dependency libpcap found: YES 1.10.4 00:01:01.952 Has header "pcap.h" with dependency libpcap: YES 00:01:01.952 Compiler for C supports arguments -Wcast-qual: YES 00:01:01.952 Compiler for C supports arguments -Wdeprecated: YES 00:01:01.952 Compiler for C supports arguments -Wformat: YES 00:01:01.952 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:01.952 Compiler for C supports arguments -Wformat-security: NO 00:01:01.952 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:01.952 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:01.952 Compiler for C supports arguments -Wnested-externs: YES 00:01:01.952 Compiler for C supports arguments -Wold-style-definition: YES 00:01:01.952 Compiler for C supports arguments -Wpointer-arith: YES 00:01:01.952 Compiler for C supports arguments -Wsign-compare: YES 00:01:01.952 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:01.952 Compiler for C supports arguments -Wundef: YES 00:01:01.952 Compiler for C supports arguments -Wwrite-strings: YES 00:01:01.952 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:01.952 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:01.952 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:01.952 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:01.952 Program objdump found: YES (/usr/bin/objdump) 00:01:01.952 Compiler for C supports arguments -mavx512f: YES 00:01:01.952 Checking if "AVX512 checking" compiles: YES 00:01:01.952 Fetching value of define "__SSE4_2__" : 1 00:01:01.952 Fetching value of define "__AES__" : 1 00:01:01.952 Fetching value of define "__AVX__" : 1 00:01:01.952 Fetching value of define "__AVX2__" : (undefined) 00:01:01.952 Fetching value of define "__AVX512BW__" : (undefined) 00:01:01.952 Fetching value of define "__AVX512CD__" : (undefined) 00:01:01.952 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:01.952 Fetching value of define "__AVX512F__" : (undefined) 00:01:01.952 Fetching value of define "__AVX512VL__" : (undefined) 00:01:01.952 Fetching value of define "__PCLMUL__" : 1 00:01:01.952 Fetching value of define "__RDRND__" : 1 00:01:01.952 Fetching value of define "__RDSEED__" : (undefined) 00:01:01.952 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:01.952 Fetching value of define "__znver1__" : (undefined) 00:01:01.952 Fetching value of define "__znver2__" : (undefined) 00:01:01.952 Fetching value of define "__znver3__" : (undefined) 00:01:01.952 Fetching value of define "__znver4__" : (undefined) 00:01:01.952 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:01.952 Message: lib/log: Defining dependency "log" 00:01:01.952 Message: lib/kvargs: Defining dependency "kvargs" 00:01:01.952 Message: lib/telemetry: Defining dependency "telemetry" 00:01:01.952 Checking for function "getentropy" : NO 00:01:01.952 Message: lib/eal: Defining dependency "eal" 00:01:01.952 Message: lib/ring: Defining dependency "ring" 00:01:01.952 Message: lib/rcu: Defining dependency "rcu" 00:01:01.952 Message: lib/mempool: Defining dependency "mempool" 00:01:01.952 Message: lib/mbuf: Defining dependency "mbuf" 00:01:01.952 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:01.952 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.952 Compiler for C supports arguments -mpclmul: YES 00:01:01.952 Compiler for C supports arguments -maes: YES 00:01:01.952 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:01.952 Compiler for C supports arguments -mavx512bw: YES 00:01:01.952 Compiler for C supports arguments -mavx512dq: YES 00:01:01.953 Compiler for C supports arguments -mavx512vl: YES 00:01:01.953 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:01.953 Compiler for C supports arguments -mavx2: YES 00:01:01.953 Compiler for C supports arguments -mavx: YES 00:01:01.953 Message: lib/net: Defining dependency "net" 00:01:01.953 Message: lib/meter: Defining dependency "meter" 00:01:01.953 Message: lib/ethdev: Defining dependency "ethdev" 00:01:01.953 Message: lib/pci: Defining dependency "pci" 00:01:01.953 Message: lib/cmdline: Defining dependency "cmdline" 00:01:01.953 Message: lib/hash: Defining dependency "hash" 00:01:01.953 Message: lib/timer: Defining dependency "timer" 00:01:01.953 Message: lib/compressdev: Defining dependency "compressdev" 00:01:01.953 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:01.953 Message: lib/dmadev: Defining dependency "dmadev" 00:01:01.953 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:01.953 Message: lib/power: Defining dependency "power" 00:01:01.953 Message: lib/reorder: Defining dependency "reorder" 00:01:01.953 Message: lib/security: Defining dependency "security" 
00:01:01.953 Has header "linux/userfaultfd.h" : YES
00:01:01.953 Has header "linux/vduse.h" : YES
00:01:01.953 Message: lib/vhost: Defining dependency "vhost"
00:01:01.953 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:01.953 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:01.953 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:01.953 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:01.953 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:01.953 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:01.953 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:01.953 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:01.953 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:01.953 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:01.953 Program doxygen found: YES (/usr/bin/doxygen)
00:01:01.953 Configuring doxy-api-html.conf using configuration
00:01:01.953 Configuring doxy-api-man.conf using configuration
00:01:01.953 Program mandb found: YES (/usr/bin/mandb)
00:01:01.953 Program sphinx-build found: NO
00:01:01.953 Configuring rte_build_config.h using configuration
00:01:01.953 Message:
00:01:01.953 =================
00:01:01.953 Applications Enabled
00:01:01.953 =================
00:01:01.953
00:01:01.953 apps:
00:01:01.953
00:01:01.953
00:01:01.953 Message:
00:01:01.953 =================
00:01:01.953 Libraries Enabled
00:01:01.953 =================
00:01:01.953
00:01:01.953 libs:
00:01:01.953 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:01.953 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:01.953 cryptodev, dmadev, power, reorder, security, vhost,
00:01:01.953
00:01:01.953 Message:
00:01:01.953 ===============
00:01:01.953 Drivers Enabled
00:01:01.953 ===============
00:01:01.953
00:01:01.953 common:
00:01:01.953
00:01:01.953 bus:
00:01:01.953 pci, vdev,
00:01:01.953 mempool:
00:01:01.953 ring,
00:01:01.953 dma:
00:01:01.953
00:01:01.953 net:
00:01:01.953
00:01:01.953 crypto:
00:01:01.953
00:01:01.953 compress:
00:01:01.953
00:01:01.953 vdpa:
00:01:01.953
00:01:01.953
00:01:01.953 Message:
00:01:01.953 =================
00:01:01.953 Content Skipped
00:01:01.953 =================
00:01:01.953
00:01:01.953 apps:
00:01:01.953 dumpcap: explicitly disabled via build config
00:01:01.953 graph: explicitly disabled via build config
00:01:01.953 pdump: explicitly disabled via build config
00:01:01.953 proc-info: explicitly disabled via build config
00:01:01.953 test-acl: explicitly disabled via build config
00:01:01.953 test-bbdev: explicitly disabled via build config
00:01:01.953 test-cmdline: explicitly disabled via build config
00:01:01.953 test-compress-perf: explicitly disabled via build config
00:01:01.953 test-crypto-perf: explicitly disabled via build config
00:01:01.953 test-dma-perf: explicitly disabled via build config
00:01:01.953 test-eventdev: explicitly disabled via build config
00:01:01.953 test-fib: explicitly disabled via build config
00:01:01.953 test-flow-perf: explicitly disabled via build config
00:01:01.953 test-gpudev: explicitly disabled via build config
00:01:01.953 test-mldev: explicitly disabled via build config
00:01:01.953 test-pipeline: explicitly disabled via build config
00:01:01.953 test-pmd: explicitly disabled via build config
00:01:01.953 test-regex: explicitly disabled via build config
00:01:01.953 test-sad: explicitly disabled via build config 00:01:01.953 test-security-perf: explicitly disabled via build config 00:01:01.953 00:01:01.953 libs: 00:01:01.953 metrics: explicitly disabled via build config 00:01:01.953 acl: explicitly disabled via build config 00:01:01.953 bbdev: explicitly disabled via build config 00:01:01.953 bitratestats: explicitly disabled via build config 00:01:01.953 bpf: explicitly disabled via build config 00:01:01.953 cfgfile: explicitly disabled via build config 00:01:01.953 distributor: explicitly disabled via build config 00:01:01.953 efd: explicitly disabled via build config 00:01:01.953 eventdev: explicitly disabled via build config 00:01:01.953 dispatcher: explicitly disabled via build config 00:01:01.953 gpudev: explicitly disabled via build config 00:01:01.953 gro: explicitly disabled via build config 00:01:01.953 gso: explicitly disabled via build config 00:01:01.953 ip_frag: explicitly disabled via build config 00:01:01.953 jobstats: explicitly disabled via build config 00:01:01.953 latencystats: explicitly disabled via build config 00:01:01.953 lpm: explicitly disabled via build config 00:01:01.953 member: explicitly disabled via build config 00:01:01.953 pcapng: explicitly disabled via build config 00:01:01.953 rawdev: explicitly disabled via build config 00:01:01.953 regexdev: explicitly disabled via build config 00:01:01.953 mldev: explicitly disabled via build config 00:01:01.953 rib: explicitly disabled via build config 00:01:01.953 sched: explicitly disabled via build config 00:01:01.953 stack: explicitly disabled via build config 00:01:01.953 ipsec: explicitly disabled via build config 00:01:01.953 pdcp: explicitly disabled via build config 00:01:01.953 fib: explicitly disabled via build config 00:01:01.953 port: explicitly disabled via build config 00:01:01.953 pdump: explicitly disabled via build config 00:01:01.953 table: explicitly disabled via build config 00:01:01.953 pipeline: explicitly disabled via build config 00:01:01.953 graph: explicitly disabled via build config 00:01:01.953 node: explicitly disabled via build config 00:01:01.953 00:01:01.953 drivers: 00:01:01.953 common/cpt: not in enabled drivers build config 00:01:01.953 common/dpaax: not in enabled drivers build config 00:01:01.953 common/iavf: not in enabled drivers build config 00:01:01.953 common/idpf: not in enabled drivers build config 00:01:01.953 common/mvep: not in enabled drivers build config 00:01:01.953 common/octeontx: not in enabled drivers build config 00:01:01.953 bus/auxiliary: not in enabled drivers build config 00:01:01.953 bus/cdx: not in enabled drivers build config 00:01:01.953 bus/dpaa: not in enabled drivers build config 00:01:01.953 bus/fslmc: not in enabled drivers build config 00:01:01.953 bus/ifpga: not in enabled drivers build config 00:01:01.953 bus/platform: not in enabled drivers build config 00:01:01.953 bus/vmbus: not in enabled drivers build config 00:01:01.953 common/cnxk: not in enabled drivers build config 00:01:01.953 common/mlx5: not in enabled drivers build config 00:01:01.953 common/nfp: not in enabled drivers build config 00:01:01.953 common/qat: not in enabled drivers build config 00:01:01.953 common/sfc_efx: not in enabled drivers build config 00:01:01.953 mempool/bucket: not in enabled drivers build config 00:01:01.953 mempool/cnxk: not in enabled drivers build config 00:01:01.953 mempool/dpaa: not in enabled drivers build config 00:01:01.953 mempool/dpaa2: not in enabled drivers build config 00:01:01.953 
mempool/octeontx: not in enabled drivers build config 00:01:01.953 mempool/stack: not in enabled drivers build config 00:01:01.953 dma/cnxk: not in enabled drivers build config 00:01:01.953 dma/dpaa: not in enabled drivers build config 00:01:01.953 dma/dpaa2: not in enabled drivers build config 00:01:01.953 dma/hisilicon: not in enabled drivers build config 00:01:01.953 dma/idxd: not in enabled drivers build config 00:01:01.953 dma/ioat: not in enabled drivers build config 00:01:01.953 dma/skeleton: not in enabled drivers build config 00:01:01.953 net/af_packet: not in enabled drivers build config 00:01:01.953 net/af_xdp: not in enabled drivers build config 00:01:01.953 net/ark: not in enabled drivers build config 00:01:01.953 net/atlantic: not in enabled drivers build config 00:01:01.953 net/avp: not in enabled drivers build config 00:01:01.953 net/axgbe: not in enabled drivers build config 00:01:01.953 net/bnx2x: not in enabled drivers build config 00:01:01.953 net/bnxt: not in enabled drivers build config 00:01:01.953 net/bonding: not in enabled drivers build config 00:01:01.953 net/cnxk: not in enabled drivers build config 00:01:01.953 net/cpfl: not in enabled drivers build config 00:01:01.953 net/cxgbe: not in enabled drivers build config 00:01:01.953 net/dpaa: not in enabled drivers build config 00:01:01.954 net/dpaa2: not in enabled drivers build config 00:01:01.954 net/e1000: not in enabled drivers build config 00:01:01.954 net/ena: not in enabled drivers build config 00:01:01.954 net/enetc: not in enabled drivers build config 00:01:01.954 net/enetfec: not in enabled drivers build config 00:01:01.954 net/enic: not in enabled drivers build config 00:01:01.954 net/failsafe: not in enabled drivers build config 00:01:01.954 net/fm10k: not in enabled drivers build config 00:01:01.954 net/gve: not in enabled drivers build config 00:01:01.954 net/hinic: not in enabled drivers build config 00:01:01.954 net/hns3: not in enabled drivers build config 00:01:01.954 net/i40e: not in enabled drivers build config 00:01:01.954 net/iavf: not in enabled drivers build config 00:01:01.954 net/ice: not in enabled drivers build config 00:01:01.954 net/idpf: not in enabled drivers build config 00:01:01.954 net/igc: not in enabled drivers build config 00:01:01.954 net/ionic: not in enabled drivers build config 00:01:01.954 net/ipn3ke: not in enabled drivers build config 00:01:01.954 net/ixgbe: not in enabled drivers build config 00:01:01.954 net/mana: not in enabled drivers build config 00:01:01.954 net/memif: not in enabled drivers build config 00:01:01.954 net/mlx4: not in enabled drivers build config 00:01:01.954 net/mlx5: not in enabled drivers build config 00:01:01.954 net/mvneta: not in enabled drivers build config 00:01:01.954 net/mvpp2: not in enabled drivers build config 00:01:01.954 net/netvsc: not in enabled drivers build config 00:01:01.954 net/nfb: not in enabled drivers build config 00:01:01.954 net/nfp: not in enabled drivers build config 00:01:01.954 net/ngbe: not in enabled drivers build config 00:01:01.954 net/null: not in enabled drivers build config 00:01:01.954 net/octeontx: not in enabled drivers build config 00:01:01.954 net/octeon_ep: not in enabled drivers build config 00:01:01.954 net/pcap: not in enabled drivers build config 00:01:01.954 net/pfe: not in enabled drivers build config 00:01:01.954 net/qede: not in enabled drivers build config 00:01:01.954 net/ring: not in enabled drivers build config 00:01:01.954 net/sfc: not in enabled drivers build config 00:01:01.954 net/softnic: 
not in enabled drivers build config
00:01:01.954 net/tap: not in enabled drivers build config
00:01:01.954 net/thunderx: not in enabled drivers build config
00:01:01.954 net/txgbe: not in enabled drivers build config
00:01:01.954 net/vdev_netvsc: not in enabled drivers build config
00:01:01.954 net/vhost: not in enabled drivers build config
00:01:01.954 net/virtio: not in enabled drivers build config
00:01:01.954 net/vmxnet3: not in enabled drivers build config
00:01:01.954 raw/*: missing internal dependency, "rawdev"
00:01:01.954 crypto/armv8: not in enabled drivers build config
00:01:01.954 crypto/bcmfs: not in enabled drivers build config
00:01:01.954 crypto/caam_jr: not in enabled drivers build config
00:01:01.954 crypto/ccp: not in enabled drivers build config
00:01:01.954 crypto/cnxk: not in enabled drivers build config
00:01:01.954 crypto/dpaa_sec: not in enabled drivers build config
00:01:01.954 crypto/dpaa2_sec: not in enabled drivers build config
00:01:01.954 crypto/ipsec_mb: not in enabled drivers build config
00:01:01.954 crypto/mlx5: not in enabled drivers build config
00:01:01.954 crypto/mvsam: not in enabled drivers build config
00:01:01.954 crypto/nitrox: not in enabled drivers build config
00:01:01.954 crypto/null: not in enabled drivers build config
00:01:01.954 crypto/octeontx: not in enabled drivers build config
00:01:01.954 crypto/openssl: not in enabled drivers build config
00:01:01.954 crypto/scheduler: not in enabled drivers build config
00:01:01.954 crypto/uadk: not in enabled drivers build config
00:01:01.954 crypto/virtio: not in enabled drivers build config
00:01:01.954 compress/isal: not in enabled drivers build config
00:01:01.954 compress/mlx5: not in enabled drivers build config
00:01:01.954 compress/octeontx: not in enabled drivers build config
00:01:01.954 compress/zlib: not in enabled drivers build config
00:01:01.954 regex/*: missing internal dependency, "regexdev"
00:01:01.954 ml/*: missing internal dependency, "mldev"
00:01:01.954 vdpa/ifc: not in enabled drivers build config
00:01:01.954 vdpa/mlx5: not in enabled drivers build config
00:01:01.954 vdpa/nfp: not in enabled drivers build config
00:01:01.954 vdpa/sfc: not in enabled drivers build config
00:01:01.954 event/*: missing internal dependency, "eventdev"
00:01:01.954 baseband/*: missing internal dependency, "bbdev"
00:01:01.954 gpu/*: missing internal dependency, "gpudev"
00:01:01.954
00:01:01.954
00:01:01.954 Build targets in project: 85
00:01:01.954
00:01:01.954 DPDK 23.11.0
00:01:01.954
00:01:01.954 User defined options
00:01:01.954 buildtype : debug
00:01:01.954 default_library : shared
00:01:01.954 libdir : lib
00:01:01.954 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:01.954 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:01.954 c_link_args :
00:01:01.954 cpu_instruction_set: native
00:01:01.954 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:01.954 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:01.954 enable_docs : false
00:01:01.954 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:01.954 enable_kmods : false 00:01:01.954 tests : false 00:01:01.954 00:01:01.954 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:01.954 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:01.954 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:01.954 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:01.954 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:01.954 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:01.954 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:01.954 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:01.954 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:01.954 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:01.954 [9/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:01.954 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:01.954 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:01.954 [12/265] Linking static target lib/librte_kvargs.a 00:01:01.954 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:01.954 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:01.954 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:01.954 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:01.954 [17/265] Linking static target lib/librte_log.a 00:01:01.954 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:01.954 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:01.954 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:02.216 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:02.790 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.790 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:02.790 [24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:02.790 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:02.790 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:02.790 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:02.790 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:02.790 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:02.790 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:02.790 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:02.790 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:02.790 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:02.790 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:02.790 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:02.790 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:02.790 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 
00:01:02.790 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:02.790 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:02.790 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:02.790 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:02.790 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:02.790 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:02.790 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:02.790 [45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:02.790 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:02.790 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:02.790 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:02.790 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:02.790 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:02.790 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:02.790 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:02.790 [53/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:02.790 [54/265] Linking static target lib/librte_telemetry.a 00:01:02.790 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:02.790 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:02.790 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:03.055 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:03.055 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:03.055 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:03.055 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:03.055 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:03.055 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:03.055 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:03.055 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:03.055 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:03.055 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:03.055 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:03.055 [69/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.055 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:03.055 [71/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:03.055 [72/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:03.055 [73/265] Linking static target lib/librte_pci.a 00:01:03.314 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:03.314 [75/265] Linking target lib/librte_log.so.24.0 00:01:03.314 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:03.314 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:03.314 [78/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:03.314 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:03.314 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:03.314 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:03.314 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:03.314 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:03.314 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:03.314 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:03.314 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:03.575 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:03.575 [88/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:03.575 [89/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:03.575 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:03.575 [91/265] Linking target lib/librte_kvargs.so.24.0 00:01:03.575 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:03.837 [93/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:03.837 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:03.837 [95/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:03.837 [96/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.837 [97/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:03.837 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:03.837 [99/265] Linking static target lib/librte_ring.a 00:01:03.837 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:03.837 [101/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:03.837 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:03.837 [103/265] Linking static target lib/librte_meter.a 00:01:03.837 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:03.837 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:03.837 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:03.837 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:03.837 [108/265] Linking static target lib/librte_eal.a 00:01:03.837 [109/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:03.837 [110/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:03.837 [111/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:03.837 [112/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:03.837 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:03.837 [114/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.098 [115/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:04.098 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:04.098 [117/265] Linking static target lib/librte_mempool.a 00:01:04.098 [118/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:04.098 [119/265] 
Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:04.098 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:04.098 [121/265] Linking static target lib/librte_rcu.a 00:01:04.098 [122/265] Linking target lib/librte_telemetry.so.24.0 00:01:04.098 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:04.098 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:04.098 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:04.098 [126/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:04.098 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:04.098 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:04.098 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:04.098 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:04.098 [131/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:04.098 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:04.364 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:04.364 [134/265] Linking static target lib/librte_cmdline.a 00:01:04.364 [135/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.364 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:04.364 [137/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:04.364 [138/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:04.364 [139/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:04.364 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:04.364 [141/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.364 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:04.364 [143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:04.364 [144/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:04.364 [145/265] Linking static target lib/librte_timer.a 00:01:04.364 [146/265] Linking static target lib/librte_net.a 00:01:04.623 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:04.623 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:04.623 [149/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.623 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:04.623 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:04.623 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:04.881 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:04.881 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:04.881 [155/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.881 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:04.881 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:04.881 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:04.881 [159/265] Linking 
static target lib/librte_dmadev.a 00:01:04.881 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:04.881 [161/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:04.881 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.881 [163/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:04.881 [164/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.140 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:05.140 [166/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:05.140 [167/265] Linking static target lib/librte_hash.a 00:01:05.140 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:05.140 [169/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:05.140 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:05.140 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:05.140 [172/265] Linking static target lib/librte_power.a 00:01:05.140 [173/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:05.140 [174/265] Linking static target lib/librte_compressdev.a 00:01:05.140 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:05.140 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:05.140 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:05.140 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:05.140 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:05.140 [180/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:05.140 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:05.140 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:05.140 [183/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.398 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:05.398 [185/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.398 [186/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:05.398 [187/265] Linking static target lib/librte_reorder.a 00:01:05.398 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:05.398 [189/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:05.398 [190/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:05.398 [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:05.398 [192/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:05.398 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:05.398 [194/265] Linking static target drivers/librte_bus_vdev.a 00:01:05.398 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:05.398 [196/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:05.398 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:05.655 [198/265] Generating lib/hash.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:05.655 [199/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.655 [200/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:05.655 [201/265] Linking static target lib/librte_mbuf.a 00:01:05.655 [202/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:05.655 [203/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:05.655 [204/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:05.655 [205/265] Linking static target drivers/librte_mempool_ring.a 00:01:05.655 [206/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.655 [207/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.655 [208/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:05.655 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:05.655 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:05.655 [211/265] Linking static target drivers/librte_bus_pci.a 00:01:05.655 [212/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.655 [213/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:05.655 [214/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:05.655 [215/265] Linking static target lib/librte_security.a 00:01:05.911 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:05.911 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:05.911 [218/265] Linking static target lib/librte_ethdev.a 00:01:05.911 [219/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:05.911 [220/265] Linking static target lib/librte_cryptodev.a 00:01:05.911 [221/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.169 [222/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.169 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.102 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.035 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:09.935 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.193 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.193 [228/265] Linking target lib/librte_eal.so.24.0 00:01:10.193 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:10.451 [230/265] Linking target lib/librte_timer.so.24.0 00:01:10.451 [231/265] Linking target lib/librte_meter.so.24.0 00:01:10.451 [232/265] Linking target lib/librte_ring.so.24.0 00:01:10.451 [233/265] Linking target lib/librte_dmadev.so.24.0 00:01:10.451 [234/265] Linking target lib/librte_pci.so.24.0 00:01:10.451 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:10.451 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:10.451 [237/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:10.451 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:10.451 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:10.451 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:10.451 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:10.451 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:10.451 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:10.709 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:10.709 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:10.709 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:10.709 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:10.709 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:10.709 [249/265] Linking target lib/librte_compressdev.so.24.0 00:01:10.709 [250/265] Linking target lib/librte_net.so.24.0 00:01:10.709 [251/265] Linking target lib/librte_reorder.so.24.0 00:01:10.709 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:10.968 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:10.968 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:10.968 [255/265] Linking target lib/librte_hash.so.24.0 00:01:10.968 [256/265] Linking target lib/librte_security.so.24.0 00:01:10.968 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:10.968 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:11.226 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:11.226 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:11.226 [261/265] Linking target lib/librte_power.so.24.0 00:01:13.806 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:13.806 [263/265] Linking static target lib/librte_vhost.a 00:01:14.739 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.739 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:14.739 INFO: autodetecting backend as ninja 00:01:14.739 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:15.682 CC lib/log/log.o 00:01:15.682 CC lib/log/log_flags.o 00:01:15.682 CC lib/log/log_deprecated.o 00:01:15.682 CC lib/ut_mock/mock.o 00:01:15.682 CC lib/ut/ut.o 00:01:15.682 LIB libspdk_ut_mock.a 00:01:15.682 SO libspdk_ut_mock.so.6.0 00:01:15.682 LIB libspdk_log.a 00:01:15.682 LIB libspdk_ut.a 00:01:15.682 SO libspdk_ut.so.2.0 00:01:15.682 SO libspdk_log.so.7.0 00:01:15.939 SYMLINK libspdk_ut_mock.so 00:01:15.939 SYMLINK libspdk_ut.so 00:01:15.939 SYMLINK libspdk_log.so 00:01:15.939 CXX lib/trace_parser/trace.o 00:01:15.939 CC lib/dma/dma.o 00:01:15.939 CC lib/ioat/ioat.o 00:01:15.939 CC lib/util/base64.o 00:01:15.939 CC lib/util/bit_array.o 00:01:15.939 CC lib/util/cpuset.o 00:01:15.939 CC lib/util/crc16.o 00:01:15.939 CC lib/util/crc32.o 00:01:15.939 CC lib/util/crc32c.o 00:01:15.939 CC lib/util/crc32_ieee.o 00:01:15.939 CC lib/util/crc64.o 00:01:15.939 CC lib/util/dif.o 00:01:15.939 CC lib/util/fd.o 00:01:15.939 CC lib/util/file.o 00:01:15.939 CC lib/util/hexlify.o 00:01:15.939 CC lib/util/iov.o 00:01:15.939 CC 
lib/util/math.o 00:01:15.939 CC lib/util/pipe.o 00:01:15.939 CC lib/util/strerror_tls.o 00:01:15.939 CC lib/util/string.o 00:01:15.939 CC lib/util/uuid.o 00:01:15.939 CC lib/util/fd_group.o 00:01:15.939 CC lib/util/xor.o 00:01:15.939 CC lib/util/zipf.o 00:01:16.198 CC lib/vfio_user/host/vfio_user_pci.o 00:01:16.198 CC lib/vfio_user/host/vfio_user.o 00:01:16.198 LIB libspdk_dma.a 00:01:16.198 SO libspdk_dma.so.4.0 00:01:16.198 SYMLINK libspdk_dma.so 00:01:16.456 LIB libspdk_ioat.a 00:01:16.456 SO libspdk_ioat.so.7.0 00:01:16.456 LIB libspdk_vfio_user.a 00:01:16.456 SYMLINK libspdk_ioat.so 00:01:16.456 SO libspdk_vfio_user.so.5.0 00:01:16.456 SYMLINK libspdk_vfio_user.so 00:01:16.456 LIB libspdk_util.a 00:01:16.714 SO libspdk_util.so.9.0 00:01:16.714 SYMLINK libspdk_util.so 00:01:16.973 CC lib/vmd/vmd.o 00:01:16.973 CC lib/conf/conf.o 00:01:16.973 CC lib/vmd/led.o 00:01:16.973 CC lib/rdma/common.o 00:01:16.973 CC lib/idxd/idxd.o 00:01:16.973 CC lib/json/json_parse.o 00:01:16.973 CC lib/env_dpdk/env.o 00:01:16.973 LIB libspdk_trace_parser.a 00:01:16.973 CC lib/idxd/idxd_user.o 00:01:16.973 CC lib/rdma/rdma_verbs.o 00:01:16.973 CC lib/json/json_util.o 00:01:16.973 CC lib/env_dpdk/memory.o 00:01:16.973 CC lib/json/json_write.o 00:01:16.973 CC lib/env_dpdk/pci.o 00:01:16.973 CC lib/env_dpdk/init.o 00:01:16.973 CC lib/env_dpdk/threads.o 00:01:16.973 CC lib/env_dpdk/pci_ioat.o 00:01:16.973 CC lib/env_dpdk/pci_virtio.o 00:01:16.973 CC lib/env_dpdk/pci_vmd.o 00:01:16.973 CC lib/env_dpdk/pci_idxd.o 00:01:16.973 CC lib/env_dpdk/pci_event.o 00:01:16.973 CC lib/env_dpdk/sigbus_handler.o 00:01:16.973 CC lib/env_dpdk/pci_dpdk.o 00:01:16.973 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:16.973 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:16.973 SO libspdk_trace_parser.so.5.0 00:01:16.973 SYMLINK libspdk_trace_parser.so 00:01:17.231 LIB libspdk_conf.a 00:01:17.231 SO libspdk_conf.so.6.0 00:01:17.231 LIB libspdk_rdma.a 00:01:17.231 SYMLINK libspdk_conf.so 00:01:17.231 LIB libspdk_json.a 00:01:17.231 SO libspdk_rdma.so.6.0 00:01:17.231 SO libspdk_json.so.6.0 00:01:17.231 SYMLINK libspdk_rdma.so 00:01:17.489 SYMLINK libspdk_json.so 00:01:17.489 LIB libspdk_idxd.a 00:01:17.489 CC lib/jsonrpc/jsonrpc_server.o 00:01:17.489 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:17.489 CC lib/jsonrpc/jsonrpc_client.o 00:01:17.489 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:17.489 SO libspdk_idxd.so.12.0 00:01:17.489 SYMLINK libspdk_idxd.so 00:01:17.748 LIB libspdk_vmd.a 00:01:17.748 SO libspdk_vmd.so.6.0 00:01:17.748 SYMLINK libspdk_vmd.so 00:01:17.748 LIB libspdk_jsonrpc.a 00:01:17.748 SO libspdk_jsonrpc.so.6.0 00:01:18.006 SYMLINK libspdk_jsonrpc.so 00:01:18.006 CC lib/rpc/rpc.o 00:01:18.264 LIB libspdk_rpc.a 00:01:18.264 SO libspdk_rpc.so.6.0 00:01:18.264 SYMLINK libspdk_rpc.so 00:01:18.522 CC lib/trace/trace.o 00:01:18.522 CC lib/keyring/keyring.o 00:01:18.522 CC lib/trace/trace_flags.o 00:01:18.522 CC lib/notify/notify.o 00:01:18.522 CC lib/keyring/keyring_rpc.o 00:01:18.522 CC lib/trace/trace_rpc.o 00:01:18.522 CC lib/notify/notify_rpc.o 00:01:18.779 LIB libspdk_notify.a 00:01:18.779 SO libspdk_notify.so.6.0 00:01:18.779 LIB libspdk_keyring.a 00:01:18.779 SYMLINK libspdk_notify.so 00:01:18.779 LIB libspdk_trace.a 00:01:18.779 SO libspdk_keyring.so.1.0 00:01:18.779 SO libspdk_trace.so.10.0 00:01:18.779 SYMLINK libspdk_keyring.so 00:01:18.779 SYMLINK libspdk_trace.so 00:01:19.037 LIB libspdk_env_dpdk.a 00:01:19.037 SO libspdk_env_dpdk.so.14.0 00:01:19.037 CC lib/thread/thread.o 00:01:19.037 CC lib/thread/iobuf.o 00:01:19.037 CC 
lib/sock/sock.o 00:01:19.037 CC lib/sock/sock_rpc.o 00:01:19.037 SYMLINK libspdk_env_dpdk.so 00:01:19.295 LIB libspdk_sock.a 00:01:19.295 SO libspdk_sock.so.9.0 00:01:19.553 SYMLINK libspdk_sock.so 00:01:19.553 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:19.553 CC lib/nvme/nvme_ctrlr.o 00:01:19.553 CC lib/nvme/nvme_fabric.o 00:01:19.553 CC lib/nvme/nvme_ns_cmd.o 00:01:19.553 CC lib/nvme/nvme_ns.o 00:01:19.553 CC lib/nvme/nvme_pcie_common.o 00:01:19.553 CC lib/nvme/nvme_pcie.o 00:01:19.553 CC lib/nvme/nvme_qpair.o 00:01:19.553 CC lib/nvme/nvme.o 00:01:19.553 CC lib/nvme/nvme_quirks.o 00:01:19.553 CC lib/nvme/nvme_transport.o 00:01:19.553 CC lib/nvme/nvme_discovery.o 00:01:19.553 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:19.553 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:19.553 CC lib/nvme/nvme_tcp.o 00:01:19.553 CC lib/nvme/nvme_opal.o 00:01:19.553 CC lib/nvme/nvme_io_msg.o 00:01:19.553 CC lib/nvme/nvme_poll_group.o 00:01:19.553 CC lib/nvme/nvme_zns.o 00:01:19.553 CC lib/nvme/nvme_stubs.o 00:01:19.553 CC lib/nvme/nvme_auth.o 00:01:19.553 CC lib/nvme/nvme_cuse.o 00:01:19.553 CC lib/nvme/nvme_vfio_user.o 00:01:19.553 CC lib/nvme/nvme_rdma.o 00:01:20.926 LIB libspdk_thread.a 00:01:20.926 SO libspdk_thread.so.10.0 00:01:20.926 SYMLINK libspdk_thread.so 00:01:20.926 CC lib/blob/blobstore.o 00:01:20.926 CC lib/accel/accel.o 00:01:20.926 CC lib/accel/accel_rpc.o 00:01:20.926 CC lib/blob/request.o 00:01:20.926 CC lib/accel/accel_sw.o 00:01:20.926 CC lib/blob/zeroes.o 00:01:20.926 CC lib/virtio/virtio.o 00:01:20.926 CC lib/blob/blob_bs_dev.o 00:01:20.926 CC lib/virtio/virtio_vhost_user.o 00:01:20.926 CC lib/vfu_tgt/tgt_endpoint.o 00:01:20.926 CC lib/init/json_config.o 00:01:20.926 CC lib/virtio/virtio_vfio_user.o 00:01:20.926 CC lib/init/subsystem.o 00:01:20.926 CC lib/vfu_tgt/tgt_rpc.o 00:01:20.926 CC lib/virtio/virtio_pci.o 00:01:20.926 CC lib/init/subsystem_rpc.o 00:01:20.926 CC lib/init/rpc.o 00:01:21.184 LIB libspdk_init.a 00:01:21.184 SO libspdk_init.so.5.0 00:01:21.184 LIB libspdk_virtio.a 00:01:21.184 LIB libspdk_vfu_tgt.a 00:01:21.184 SYMLINK libspdk_init.so 00:01:21.184 SO libspdk_vfu_tgt.so.3.0 00:01:21.184 SO libspdk_virtio.so.7.0 00:01:21.442 SYMLINK libspdk_vfu_tgt.so 00:01:21.442 SYMLINK libspdk_virtio.so 00:01:21.442 CC lib/event/app.o 00:01:21.442 CC lib/event/reactor.o 00:01:21.442 CC lib/event/log_rpc.o 00:01:21.442 CC lib/event/app_rpc.o 00:01:21.442 CC lib/event/scheduler_static.o 00:01:22.007 LIB libspdk_event.a 00:01:22.007 SO libspdk_event.so.13.0 00:01:22.007 SYMLINK libspdk_event.so 00:01:22.007 LIB libspdk_accel.a 00:01:22.007 LIB libspdk_nvme.a 00:01:22.007 SO libspdk_accel.so.15.0 00:01:22.007 SYMLINK libspdk_accel.so 00:01:22.266 SO libspdk_nvme.so.13.0 00:01:22.266 CC lib/bdev/bdev.o 00:01:22.266 CC lib/bdev/bdev_rpc.o 00:01:22.266 CC lib/bdev/bdev_zone.o 00:01:22.266 CC lib/bdev/part.o 00:01:22.266 CC lib/bdev/scsi_nvme.o 00:01:22.525 SYMLINK libspdk_nvme.so 00:01:23.898 LIB libspdk_blob.a 00:01:23.898 SO libspdk_blob.so.11.0 00:01:23.898 SYMLINK libspdk_blob.so 00:01:24.156 CC lib/blobfs/blobfs.o 00:01:24.156 CC lib/blobfs/tree.o 00:01:24.156 CC lib/lvol/lvol.o 00:01:25.091 LIB libspdk_blobfs.a 00:01:25.091 SO libspdk_blobfs.so.10.0 00:01:25.091 SYMLINK libspdk_blobfs.so 00:01:25.091 LIB libspdk_lvol.a 00:01:25.091 SO libspdk_lvol.so.10.0 00:01:25.091 SYMLINK libspdk_lvol.so 00:01:25.091 LIB libspdk_bdev.a 00:01:25.350 SO libspdk_bdev.so.15.0 00:01:25.350 SYMLINK libspdk_bdev.so 00:01:25.617 CC lib/nbd/nbd.o 00:01:25.617 CC lib/nvmf/ctrlr.o 00:01:25.617 CC lib/nbd/nbd_rpc.o 
00:01:25.617 CC lib/scsi/dev.o 00:01:25.617 CC lib/ublk/ublk.o 00:01:25.617 CC lib/ublk/ublk_rpc.o 00:01:25.617 CC lib/nvmf/ctrlr_discovery.o 00:01:25.617 CC lib/ftl/ftl_core.o 00:01:25.617 CC lib/scsi/lun.o 00:01:25.617 CC lib/nvmf/ctrlr_bdev.o 00:01:25.617 CC lib/ftl/ftl_init.o 00:01:25.617 CC lib/scsi/port.o 00:01:25.617 CC lib/nvmf/subsystem.o 00:01:25.617 CC lib/ftl/ftl_layout.o 00:01:25.617 CC lib/scsi/scsi.o 00:01:25.617 CC lib/nvmf/nvmf.o 00:01:25.617 CC lib/ftl/ftl_debug.o 00:01:25.617 CC lib/scsi/scsi_bdev.o 00:01:25.617 CC lib/ftl/ftl_io.o 00:01:25.617 CC lib/nvmf/nvmf_rpc.o 00:01:25.617 CC lib/scsi/scsi_pr.o 00:01:25.617 CC lib/scsi/scsi_rpc.o 00:01:25.617 CC lib/ftl/ftl_sb.o 00:01:25.617 CC lib/scsi/task.o 00:01:25.617 CC lib/nvmf/transport.o 00:01:25.617 CC lib/ftl/ftl_l2p.o 00:01:25.617 CC lib/nvmf/stubs.o 00:01:25.617 CC lib/ftl/ftl_l2p_flat.o 00:01:25.617 CC lib/nvmf/tcp.o 00:01:25.617 CC lib/ftl/ftl_nv_cache.o 00:01:25.617 CC lib/nvmf/mdns_server.o 00:01:25.617 CC lib/ftl/ftl_band.o 00:01:25.617 CC lib/ftl/ftl_band_ops.o 00:01:25.617 CC lib/nvmf/vfio_user.o 00:01:25.617 CC lib/nvmf/auth.o 00:01:25.617 CC lib/nvmf/rdma.o 00:01:25.617 CC lib/ftl/ftl_writer.o 00:01:25.617 CC lib/ftl/ftl_rq.o 00:01:25.617 CC lib/ftl/ftl_reloc.o 00:01:25.617 CC lib/ftl/ftl_l2p_cache.o 00:01:25.617 CC lib/ftl/ftl_p2l.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:25.617 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:25.875 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:25.875 CC lib/ftl/utils/ftl_conf.o 00:01:25.875 CC lib/ftl/utils/ftl_md.o 00:01:25.875 CC lib/ftl/utils/ftl_mempool.o 00:01:25.875 CC lib/ftl/utils/ftl_bitmap.o 00:01:25.876 CC lib/ftl/utils/ftl_property.o 00:01:25.876 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:25.876 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:25.876 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:25.876 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:26.135 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:26.135 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:26.135 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:26.135 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:26.135 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:26.135 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:26.135 CC lib/ftl/base/ftl_base_dev.o 00:01:26.135 CC lib/ftl/base/ftl_base_bdev.o 00:01:26.135 CC lib/ftl/ftl_trace.o 00:01:26.393 LIB libspdk_nbd.a 00:01:26.393 SO libspdk_nbd.so.7.0 00:01:26.393 SYMLINK libspdk_nbd.so 00:01:26.393 LIB libspdk_scsi.a 00:01:26.393 SO libspdk_scsi.so.9.0 00:01:26.651 LIB libspdk_ublk.a 00:01:26.651 SO libspdk_ublk.so.3.0 00:01:26.651 SYMLINK libspdk_scsi.so 00:01:26.651 SYMLINK libspdk_ublk.so 00:01:26.651 CC lib/vhost/vhost.o 00:01:26.651 CC lib/iscsi/conn.o 00:01:26.651 CC lib/vhost/vhost_rpc.o 00:01:26.651 CC lib/iscsi/init_grp.o 00:01:26.651 CC lib/vhost/vhost_scsi.o 00:01:26.651 CC lib/iscsi/iscsi.o 00:01:26.651 CC lib/vhost/vhost_blk.o 00:01:26.651 CC lib/iscsi/md5.o 00:01:26.651 CC lib/vhost/rte_vhost_user.o 00:01:26.651 CC lib/iscsi/param.o 00:01:26.651 CC lib/iscsi/portal_grp.o 00:01:26.651 CC lib/iscsi/tgt_node.o 00:01:26.651 CC lib/iscsi/iscsi_subsystem.o 
00:01:26.651 CC lib/iscsi/iscsi_rpc.o 00:01:26.651 CC lib/iscsi/task.o 00:01:26.909 LIB libspdk_ftl.a 00:01:27.187 SO libspdk_ftl.so.9.0 00:01:27.472 SYMLINK libspdk_ftl.so 00:01:28.038 LIB libspdk_vhost.a 00:01:28.038 SO libspdk_vhost.so.8.0 00:01:28.038 LIB libspdk_nvmf.a 00:01:28.038 SO libspdk_nvmf.so.18.0 00:01:28.038 SYMLINK libspdk_vhost.so 00:01:28.297 LIB libspdk_iscsi.a 00:01:28.297 SO libspdk_iscsi.so.8.0 00:01:28.297 SYMLINK libspdk_nvmf.so 00:01:28.297 SYMLINK libspdk_iscsi.so 00:01:28.555 CC module/env_dpdk/env_dpdk_rpc.o 00:01:28.555 CC module/vfu_device/vfu_virtio.o 00:01:28.555 CC module/vfu_device/vfu_virtio_blk.o 00:01:28.555 CC module/vfu_device/vfu_virtio_scsi.o 00:01:28.555 CC module/vfu_device/vfu_virtio_rpc.o 00:01:28.813 CC module/blob/bdev/blob_bdev.o 00:01:28.813 CC module/sock/posix/posix.o 00:01:28.813 CC module/keyring/file/keyring.o 00:01:28.813 CC module/accel/error/accel_error.o 00:01:28.813 CC module/keyring/file/keyring_rpc.o 00:01:28.813 CC module/accel/error/accel_error_rpc.o 00:01:28.813 CC module/accel/dsa/accel_dsa.o 00:01:28.813 CC module/accel/iaa/accel_iaa.o 00:01:28.813 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:28.813 CC module/accel/dsa/accel_dsa_rpc.o 00:01:28.813 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:28.813 CC module/accel/iaa/accel_iaa_rpc.o 00:01:28.813 CC module/accel/ioat/accel_ioat.o 00:01:28.814 CC module/accel/ioat/accel_ioat_rpc.o 00:01:28.814 CC module/scheduler/gscheduler/gscheduler.o 00:01:28.814 LIB libspdk_env_dpdk_rpc.a 00:01:28.814 SO libspdk_env_dpdk_rpc.so.6.0 00:01:28.814 SYMLINK libspdk_env_dpdk_rpc.so 00:01:28.814 LIB libspdk_keyring_file.a 00:01:28.814 LIB libspdk_scheduler_gscheduler.a 00:01:28.814 LIB libspdk_scheduler_dpdk_governor.a 00:01:29.072 SO libspdk_scheduler_gscheduler.so.4.0 00:01:29.072 SO libspdk_keyring_file.so.1.0 00:01:29.072 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:29.072 LIB libspdk_accel_error.a 00:01:29.072 LIB libspdk_accel_ioat.a 00:01:29.072 LIB libspdk_scheduler_dynamic.a 00:01:29.072 LIB libspdk_accel_iaa.a 00:01:29.072 SO libspdk_accel_error.so.2.0 00:01:29.072 SYMLINK libspdk_scheduler_gscheduler.so 00:01:29.072 SO libspdk_scheduler_dynamic.so.4.0 00:01:29.072 SO libspdk_accel_ioat.so.6.0 00:01:29.072 SYMLINK libspdk_keyring_file.so 00:01:29.072 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:29.072 SO libspdk_accel_iaa.so.3.0 00:01:29.072 LIB libspdk_accel_dsa.a 00:01:29.072 SYMLINK libspdk_accel_error.so 00:01:29.072 LIB libspdk_blob_bdev.a 00:01:29.072 SYMLINK libspdk_scheduler_dynamic.so 00:01:29.072 SO libspdk_accel_dsa.so.5.0 00:01:29.072 SYMLINK libspdk_accel_ioat.so 00:01:29.072 SYMLINK libspdk_accel_iaa.so 00:01:29.072 SO libspdk_blob_bdev.so.11.0 00:01:29.072 SYMLINK libspdk_accel_dsa.so 00:01:29.072 SYMLINK libspdk_blob_bdev.so 00:01:29.331 LIB libspdk_vfu_device.a 00:01:29.331 CC module/bdev/error/vbdev_error.o 00:01:29.331 CC module/bdev/aio/bdev_aio.o 00:01:29.331 CC module/bdev/error/vbdev_error_rpc.o 00:01:29.331 CC module/bdev/split/vbdev_split.o 00:01:29.331 CC module/bdev/raid/bdev_raid.o 00:01:29.331 CC module/bdev/iscsi/bdev_iscsi.o 00:01:29.331 CC module/bdev/split/vbdev_split_rpc.o 00:01:29.331 CC module/bdev/delay/vbdev_delay.o 00:01:29.331 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:29.331 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:29.331 CC module/bdev/passthru/vbdev_passthru.o 00:01:29.331 CC module/bdev/gpt/gpt.o 00:01:29.331 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:29.331 CC module/bdev/malloc/bdev_malloc.o 00:01:29.331 
CC module/bdev/null/bdev_null.o 00:01:29.331 CC module/blobfs/bdev/blobfs_bdev.o 00:01:29.331 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:29.331 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:29.331 CC module/bdev/gpt/vbdev_gpt.o 00:01:29.331 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:29.331 CC module/bdev/nvme/bdev_nvme.o 00:01:29.331 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:29.331 CC module/bdev/lvol/vbdev_lvol.o 00:01:29.331 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:29.331 CC module/bdev/aio/bdev_aio_rpc.o 00:01:29.331 CC module/bdev/null/bdev_null_rpc.o 00:01:29.331 CC module/bdev/raid/bdev_raid_rpc.o 00:01:29.331 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:29.331 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:29.331 CC module/bdev/raid/bdev_raid_sb.o 00:01:29.331 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:29.331 CC module/bdev/raid/raid0.o 00:01:29.331 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:29.331 CC module/bdev/raid/raid1.o 00:01:29.331 CC module/bdev/nvme/nvme_rpc.o 00:01:29.331 CC module/bdev/nvme/bdev_mdns_client.o 00:01:29.331 CC module/bdev/raid/concat.o 00:01:29.331 CC module/bdev/nvme/vbdev_opal.o 00:01:29.331 CC module/bdev/ftl/bdev_ftl.o 00:01:29.331 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:29.331 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:29.331 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:29.331 SO libspdk_vfu_device.so.3.0 00:01:29.590 SYMLINK libspdk_vfu_device.so 00:01:29.590 LIB libspdk_sock_posix.a 00:01:29.590 SO libspdk_sock_posix.so.6.0 00:01:29.848 LIB libspdk_blobfs_bdev.a 00:01:29.848 SYMLINK libspdk_sock_posix.so 00:01:29.848 SO libspdk_blobfs_bdev.so.6.0 00:01:29.848 LIB libspdk_bdev_error.a 00:01:29.848 LIB libspdk_bdev_split.a 00:01:29.848 SYMLINK libspdk_blobfs_bdev.so 00:01:29.848 LIB libspdk_bdev_null.a 00:01:29.848 SO libspdk_bdev_error.so.6.0 00:01:29.848 SO libspdk_bdev_split.so.6.0 00:01:29.848 LIB libspdk_bdev_ftl.a 00:01:29.848 SO libspdk_bdev_null.so.6.0 00:01:29.848 LIB libspdk_bdev_aio.a 00:01:29.848 LIB libspdk_bdev_gpt.a 00:01:29.848 SO libspdk_bdev_ftl.so.6.0 00:01:29.848 LIB libspdk_bdev_zone_block.a 00:01:29.848 SO libspdk_bdev_aio.so.6.0 00:01:29.848 SYMLINK libspdk_bdev_error.so 00:01:29.848 SO libspdk_bdev_gpt.so.6.0 00:01:29.848 SYMLINK libspdk_bdev_split.so 00:01:29.848 LIB libspdk_bdev_passthru.a 00:01:29.848 SO libspdk_bdev_zone_block.so.6.0 00:01:29.848 SYMLINK libspdk_bdev_null.so 00:01:29.848 LIB libspdk_bdev_malloc.a 00:01:29.848 SO libspdk_bdev_passthru.so.6.0 00:01:29.848 SYMLINK libspdk_bdev_aio.so 00:01:29.848 SYMLINK libspdk_bdev_ftl.so 00:01:29.848 SO libspdk_bdev_malloc.so.6.0 00:01:30.106 SYMLINK libspdk_bdev_gpt.so 00:01:30.106 SYMLINK libspdk_bdev_zone_block.so 00:01:30.106 LIB libspdk_bdev_iscsi.a 00:01:30.106 LIB libspdk_bdev_delay.a 00:01:30.106 SYMLINK libspdk_bdev_passthru.so 00:01:30.106 SYMLINK libspdk_bdev_malloc.so 00:01:30.106 SO libspdk_bdev_iscsi.so.6.0 00:01:30.106 SO libspdk_bdev_delay.so.6.0 00:01:30.106 SYMLINK libspdk_bdev_iscsi.so 00:01:30.106 SYMLINK libspdk_bdev_delay.so 00:01:30.107 LIB libspdk_bdev_lvol.a 00:01:30.107 LIB libspdk_bdev_virtio.a 00:01:30.107 SO libspdk_bdev_lvol.so.6.0 00:01:30.107 SO libspdk_bdev_virtio.so.6.0 00:01:30.107 SYMLINK libspdk_bdev_lvol.so 00:01:30.107 SYMLINK libspdk_bdev_virtio.so 00:01:30.672 LIB libspdk_bdev_raid.a 00:01:30.672 SO libspdk_bdev_raid.so.6.0 00:01:30.672 SYMLINK libspdk_bdev_raid.so 00:01:32.049 LIB libspdk_bdev_nvme.a 00:01:32.049 SO libspdk_bdev_nvme.so.7.0 00:01:32.049 SYMLINK libspdk_bdev_nvme.so 00:01:32.616 CC 
module/event/subsystems/sock/sock.o 00:01:32.616 CC module/event/subsystems/keyring/keyring.o 00:01:32.616 CC module/event/subsystems/scheduler/scheduler.o 00:01:32.616 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:32.616 CC module/event/subsystems/iobuf/iobuf.o 00:01:32.616 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:32.616 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:32.616 CC module/event/subsystems/vmd/vmd.o 00:01:32.616 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:32.616 LIB libspdk_event_keyring.a 00:01:32.616 LIB libspdk_event_sock.a 00:01:32.616 LIB libspdk_event_vhost_blk.a 00:01:32.616 LIB libspdk_event_scheduler.a 00:01:32.616 LIB libspdk_event_vmd.a 00:01:32.616 LIB libspdk_event_iobuf.a 00:01:32.616 SO libspdk_event_sock.so.5.0 00:01:32.616 SO libspdk_event_keyring.so.1.0 00:01:32.616 SO libspdk_event_vhost_blk.so.3.0 00:01:32.616 SO libspdk_event_scheduler.so.4.0 00:01:32.616 SO libspdk_event_vmd.so.6.0 00:01:32.616 LIB libspdk_event_vfu_tgt.a 00:01:32.616 SO libspdk_event_iobuf.so.3.0 00:01:32.616 SO libspdk_event_vfu_tgt.so.3.0 00:01:32.616 SYMLINK libspdk_event_keyring.so 00:01:32.616 SYMLINK libspdk_event_sock.so 00:01:32.616 SYMLINK libspdk_event_vhost_blk.so 00:01:32.616 SYMLINK libspdk_event_scheduler.so 00:01:32.616 SYMLINK libspdk_event_vmd.so 00:01:32.875 SYMLINK libspdk_event_iobuf.so 00:01:32.875 SYMLINK libspdk_event_vfu_tgt.so 00:01:32.875 CC module/event/subsystems/accel/accel.o 00:01:33.134 LIB libspdk_event_accel.a 00:01:33.134 SO libspdk_event_accel.so.6.0 00:01:33.134 SYMLINK libspdk_event_accel.so 00:01:33.392 CC module/event/subsystems/bdev/bdev.o 00:01:33.392 LIB libspdk_event_bdev.a 00:01:33.392 SO libspdk_event_bdev.so.6.0 00:01:33.651 SYMLINK libspdk_event_bdev.so 00:01:33.651 CC module/event/subsystems/ublk/ublk.o 00:01:33.651 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:33.651 CC module/event/subsystems/nbd/nbd.o 00:01:33.651 CC module/event/subsystems/scsi/scsi.o 00:01:33.651 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:33.909 LIB libspdk_event_ublk.a 00:01:33.909 LIB libspdk_event_nbd.a 00:01:33.909 LIB libspdk_event_scsi.a 00:01:33.909 SO libspdk_event_ublk.so.3.0 00:01:33.909 SO libspdk_event_nbd.so.6.0 00:01:33.909 SO libspdk_event_scsi.so.6.0 00:01:33.909 SYMLINK libspdk_event_ublk.so 00:01:33.909 SYMLINK libspdk_event_nbd.so 00:01:33.909 SYMLINK libspdk_event_scsi.so 00:01:33.909 LIB libspdk_event_nvmf.a 00:01:33.909 SO libspdk_event_nvmf.so.6.0 00:01:34.166 SYMLINK libspdk_event_nvmf.so 00:01:34.166 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:34.166 CC module/event/subsystems/iscsi/iscsi.o 00:01:34.166 LIB libspdk_event_vhost_scsi.a 00:01:34.425 LIB libspdk_event_iscsi.a 00:01:34.425 SO libspdk_event_vhost_scsi.so.3.0 00:01:34.425 SO libspdk_event_iscsi.so.6.0 00:01:34.425 SYMLINK libspdk_event_vhost_scsi.so 00:01:34.425 SYMLINK libspdk_event_iscsi.so 00:01:34.425 SO libspdk.so.6.0 00:01:34.425 SYMLINK libspdk.so 00:01:34.687 CC app/trace_record/trace_record.o 00:01:34.687 CC app/spdk_top/spdk_top.o 00:01:34.687 CXX app/trace/trace.o 00:01:34.687 TEST_HEADER include/spdk/accel.h 00:01:34.687 CC app/spdk_lspci/spdk_lspci.o 00:01:34.687 CC app/spdk_nvme_identify/identify.o 00:01:34.687 CC app/spdk_nvme_perf/perf.o 00:01:34.687 CC app/spdk_nvme_discover/discovery_aer.o 00:01:34.687 TEST_HEADER include/spdk/accel_module.h 00:01:34.687 CC test/rpc_client/rpc_client_test.o 00:01:34.687 TEST_HEADER include/spdk/assert.h 00:01:34.687 TEST_HEADER include/spdk/barrier.h 00:01:34.687 TEST_HEADER 
include/spdk/base64.h 00:01:34.687 TEST_HEADER include/spdk/bdev.h 00:01:34.687 TEST_HEADER include/spdk/bdev_module.h 00:01:34.687 TEST_HEADER include/spdk/bdev_zone.h 00:01:34.687 TEST_HEADER include/spdk/bit_array.h 00:01:34.687 TEST_HEADER include/spdk/bit_pool.h 00:01:34.687 TEST_HEADER include/spdk/blob_bdev.h 00:01:34.687 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:34.687 TEST_HEADER include/spdk/blobfs.h 00:01:34.687 TEST_HEADER include/spdk/blob.h 00:01:34.687 TEST_HEADER include/spdk/conf.h 00:01:34.687 TEST_HEADER include/spdk/config.h 00:01:34.687 TEST_HEADER include/spdk/cpuset.h 00:01:34.687 TEST_HEADER include/spdk/crc16.h 00:01:34.687 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:34.687 TEST_HEADER include/spdk/crc32.h 00:01:34.687 TEST_HEADER include/spdk/crc64.h 00:01:34.687 CC app/spdk_dd/spdk_dd.o 00:01:34.687 TEST_HEADER include/spdk/dif.h 00:01:34.687 TEST_HEADER include/spdk/dma.h 00:01:34.687 CC app/iscsi_tgt/iscsi_tgt.o 00:01:34.687 TEST_HEADER include/spdk/endian.h 00:01:34.687 CC app/nvmf_tgt/nvmf_main.o 00:01:34.687 TEST_HEADER include/spdk/env_dpdk.h 00:01:34.687 TEST_HEADER include/spdk/env.h 00:01:34.687 TEST_HEADER include/spdk/event.h 00:01:34.687 TEST_HEADER include/spdk/fd_group.h 00:01:34.687 TEST_HEADER include/spdk/fd.h 00:01:34.687 CC app/vhost/vhost.o 00:01:34.947 TEST_HEADER include/spdk/file.h 00:01:34.947 TEST_HEADER include/spdk/ftl.h 00:01:34.947 TEST_HEADER include/spdk/gpt_spec.h 00:01:34.947 TEST_HEADER include/spdk/hexlify.h 00:01:34.947 TEST_HEADER include/spdk/histogram_data.h 00:01:34.947 TEST_HEADER include/spdk/idxd.h 00:01:34.947 TEST_HEADER include/spdk/idxd_spec.h 00:01:34.947 TEST_HEADER include/spdk/init.h 00:01:34.947 CC examples/ioat/verify/verify.o 00:01:34.947 CC examples/ioat/perf/perf.o 00:01:34.947 TEST_HEADER include/spdk/ioat.h 00:01:34.947 CC app/spdk_tgt/spdk_tgt.o 00:01:34.947 TEST_HEADER include/spdk/ioat_spec.h 00:01:34.947 TEST_HEADER include/spdk/iscsi_spec.h 00:01:34.947 CC examples/nvme/hotplug/hotplug.o 00:01:34.947 TEST_HEADER include/spdk/json.h 00:01:34.947 CC examples/accel/perf/accel_perf.o 00:01:34.947 CC examples/nvme/arbitration/arbitration.o 00:01:34.947 CC examples/idxd/perf/perf.o 00:01:34.947 CC examples/nvme/abort/abort.o 00:01:34.947 TEST_HEADER include/spdk/jsonrpc.h 00:01:34.947 CC examples/nvme/hello_world/hello_world.o 00:01:34.947 CC examples/vmd/lsvmd/lsvmd.o 00:01:34.947 TEST_HEADER include/spdk/keyring.h 00:01:34.947 CC examples/util/zipf/zipf.o 00:01:34.947 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:34.947 CC examples/nvme/reconnect/reconnect.o 00:01:34.947 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:34.947 TEST_HEADER include/spdk/keyring_module.h 00:01:34.947 CC test/thread/poller_perf/poller_perf.o 00:01:34.947 TEST_HEADER include/spdk/likely.h 00:01:34.947 CC examples/vmd/led/led.o 00:01:34.947 CC app/fio/nvme/fio_plugin.o 00:01:34.947 TEST_HEADER include/spdk/log.h 00:01:34.947 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:34.947 CC test/event/event_perf/event_perf.o 00:01:34.947 CC examples/sock/hello_world/hello_sock.o 00:01:34.947 TEST_HEADER include/spdk/lvol.h 00:01:34.947 TEST_HEADER include/spdk/memory.h 00:01:34.947 CC test/nvme/aer/aer.o 00:01:34.947 TEST_HEADER include/spdk/mmio.h 00:01:34.947 TEST_HEADER include/spdk/nbd.h 00:01:34.947 TEST_HEADER include/spdk/notify.h 00:01:34.947 TEST_HEADER include/spdk/nvme.h 00:01:34.947 TEST_HEADER include/spdk/nvme_intel.h 00:01:34.947 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:34.947 CC examples/blob/cli/blobcli.o 
00:01:34.947 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:34.947 TEST_HEADER include/spdk/nvme_spec.h 00:01:34.947 TEST_HEADER include/spdk/nvme_zns.h 00:01:34.947 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:34.947 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:34.947 CC examples/bdev/hello_world/hello_bdev.o 00:01:34.947 TEST_HEADER include/spdk/nvmf.h 00:01:34.947 CC examples/thread/thread/thread_ex.o 00:01:34.947 TEST_HEADER include/spdk/nvmf_spec.h 00:01:34.947 TEST_HEADER include/spdk/nvmf_transport.h 00:01:34.947 TEST_HEADER include/spdk/opal.h 00:01:34.947 CC examples/blob/hello_world/hello_blob.o 00:01:34.947 CC examples/bdev/bdevperf/bdevperf.o 00:01:34.947 CC examples/nvmf/nvmf/nvmf.o 00:01:34.947 CC test/bdev/bdevio/bdevio.o 00:01:34.947 CC test/dma/test_dma/test_dma.o 00:01:34.947 TEST_HEADER include/spdk/opal_spec.h 00:01:34.947 TEST_HEADER include/spdk/pci_ids.h 00:01:34.947 CC test/accel/dif/dif.o 00:01:34.947 CC test/app/bdev_svc/bdev_svc.o 00:01:34.947 TEST_HEADER include/spdk/pipe.h 00:01:34.947 CC test/blobfs/mkfs/mkfs.o 00:01:34.947 TEST_HEADER include/spdk/queue.h 00:01:34.947 TEST_HEADER include/spdk/reduce.h 00:01:34.947 TEST_HEADER include/spdk/rpc.h 00:01:34.947 TEST_HEADER include/spdk/scheduler.h 00:01:34.947 TEST_HEADER include/spdk/scsi.h 00:01:34.947 TEST_HEADER include/spdk/scsi_spec.h 00:01:34.947 TEST_HEADER include/spdk/sock.h 00:01:34.947 TEST_HEADER include/spdk/stdinc.h 00:01:34.947 TEST_HEADER include/spdk/string.h 00:01:34.947 TEST_HEADER include/spdk/thread.h 00:01:34.947 TEST_HEADER include/spdk/trace.h 00:01:34.947 TEST_HEADER include/spdk/trace_parser.h 00:01:34.947 LINK spdk_lspci 00:01:34.947 TEST_HEADER include/spdk/tree.h 00:01:34.947 TEST_HEADER include/spdk/ublk.h 00:01:34.947 TEST_HEADER include/spdk/util.h 00:01:34.947 TEST_HEADER include/spdk/uuid.h 00:01:34.947 TEST_HEADER include/spdk/version.h 00:01:34.947 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:34.947 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:34.947 TEST_HEADER include/spdk/vhost.h 00:01:34.947 TEST_HEADER include/spdk/vmd.h 00:01:34.947 CC test/lvol/esnap/esnap.o 00:01:34.947 TEST_HEADER include/spdk/xor.h 00:01:34.947 TEST_HEADER include/spdk/zipf.h 00:01:34.947 CXX test/cpp_headers/accel.o 00:01:34.947 CC test/env/mem_callbacks/mem_callbacks.o 00:01:35.212 LINK rpc_client_test 00:01:35.212 LINK spdk_nvme_discover 00:01:35.212 LINK interrupt_tgt 00:01:35.212 LINK lsvmd 00:01:35.212 LINK nvmf_tgt 00:01:35.212 LINK poller_perf 00:01:35.212 LINK event_perf 00:01:35.212 LINK zipf 00:01:35.212 LINK vhost 00:01:35.212 LINK led 00:01:35.212 LINK spdk_trace_record 00:01:35.212 LINK iscsi_tgt 00:01:35.212 LINK pmr_persistence 00:01:35.212 LINK cmb_copy 00:01:35.212 LINK ioat_perf 00:01:35.212 LINK verify 00:01:35.212 LINK spdk_tgt 00:01:35.212 LINK hello_world 00:01:35.212 LINK hotplug 00:01:35.477 LINK bdev_svc 00:01:35.477 LINK mkfs 00:01:35.477 LINK hello_sock 00:01:35.477 LINK hello_bdev 00:01:35.477 LINK hello_blob 00:01:35.477 LINK thread 00:01:35.477 CXX test/cpp_headers/accel_module.o 00:01:35.477 LINK aer 00:01:35.477 LINK spdk_dd 00:01:35.477 CXX test/cpp_headers/assert.o 00:01:35.477 LINK idxd_perf 00:01:35.477 LINK arbitration 00:01:35.477 LINK reconnect 00:01:35.477 LINK nvmf 00:01:35.477 CC app/fio/bdev/fio_plugin.o 00:01:35.477 CXX test/cpp_headers/barrier.o 00:01:35.477 LINK abort 00:01:35.477 CXX test/cpp_headers/base64.o 00:01:35.477 LINK spdk_trace 00:01:35.748 CC test/env/vtophys/vtophys.o 00:01:35.748 CC test/event/reactor/reactor.o 00:01:35.748 CC 
test/nvme/reset/reset.o 00:01:35.748 CC test/app/histogram_perf/histogram_perf.o 00:01:35.748 LINK bdevio 00:01:35.748 CC test/app/jsoncat/jsoncat.o 00:01:35.748 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:35.748 LINK dif 00:01:35.748 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:35.748 LINK test_dma 00:01:35.748 CC test/nvme/sgl/sgl.o 00:01:35.748 CC test/env/memory/memory_ut.o 00:01:35.748 LINK accel_perf 00:01:35.748 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:35.748 CC test/app/stub/stub.o 00:01:35.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:35.748 CC test/nvme/overhead/overhead.o 00:01:35.748 CC test/nvme/e2edp/nvme_dp.o 00:01:35.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:35.748 CXX test/cpp_headers/bdev.o 00:01:35.748 CC test/env/pci/pci_ut.o 00:01:35.748 CC test/nvme/err_injection/err_injection.o 00:01:35.748 CXX test/cpp_headers/bdev_module.o 00:01:35.748 LINK nvme_manage 00:01:35.748 LINK blobcli 00:01:36.015 CC test/nvme/startup/startup.o 00:01:36.015 LINK spdk_nvme 00:01:36.015 LINK vtophys 00:01:36.015 LINK reactor 00:01:36.015 CC test/nvme/simple_copy/simple_copy.o 00:01:36.015 CC test/nvme/reserve/reserve.o 00:01:36.015 CXX test/cpp_headers/bdev_zone.o 00:01:36.015 LINK jsoncat 00:01:36.015 CXX test/cpp_headers/bit_array.o 00:01:36.015 CC test/event/reactor_perf/reactor_perf.o 00:01:36.015 LINK histogram_perf 00:01:36.015 CC test/nvme/connect_stress/connect_stress.o 00:01:36.015 CC test/event/app_repeat/app_repeat.o 00:01:36.015 CC test/nvme/boot_partition/boot_partition.o 00:01:36.015 LINK env_dpdk_post_init 00:01:36.015 CC test/nvme/compliance/nvme_compliance.o 00:01:36.015 CC test/nvme/fused_ordering/fused_ordering.o 00:01:36.015 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:36.015 CXX test/cpp_headers/bit_pool.o 00:01:36.015 CXX test/cpp_headers/blob_bdev.o 00:01:36.015 CXX test/cpp_headers/blobfs_bdev.o 00:01:36.015 CXX test/cpp_headers/blobfs.o 00:01:36.015 CC test/event/scheduler/scheduler.o 00:01:36.015 CXX test/cpp_headers/blob.o 00:01:36.015 CC test/nvme/cuse/cuse.o 00:01:36.015 CC test/nvme/fdp/fdp.o 00:01:36.015 LINK stub 00:01:36.279 CXX test/cpp_headers/conf.o 00:01:36.279 LINK reset 00:01:36.279 CXX test/cpp_headers/config.o 00:01:36.279 LINK err_injection 00:01:36.279 LINK sgl 00:01:36.279 CXX test/cpp_headers/cpuset.o 00:01:36.279 LINK startup 00:01:36.279 LINK mem_callbacks 00:01:36.279 CXX test/cpp_headers/crc16.o 00:01:36.279 CXX test/cpp_headers/crc32.o 00:01:36.279 CXX test/cpp_headers/crc64.o 00:01:36.279 LINK spdk_nvme_perf 00:01:36.279 CXX test/cpp_headers/dif.o 00:01:36.279 LINK reactor_perf 00:01:36.279 CXX test/cpp_headers/dma.o 00:01:36.279 CXX test/cpp_headers/endian.o 00:01:36.279 CXX test/cpp_headers/env_dpdk.o 00:01:36.279 CXX test/cpp_headers/env.o 00:01:36.279 LINK spdk_nvme_identify 00:01:36.279 CXX test/cpp_headers/event.o 00:01:36.279 LINK connect_stress 00:01:36.279 LINK app_repeat 00:01:36.279 CXX test/cpp_headers/fd_group.o 00:01:36.279 LINK boot_partition 00:01:36.279 LINK nvme_dp 00:01:36.279 LINK overhead 00:01:36.279 LINK reserve 00:01:36.543 LINK spdk_top 00:01:36.543 LINK simple_copy 00:01:36.543 LINK bdevperf 00:01:36.543 CXX test/cpp_headers/fd.o 00:01:36.543 CXX test/cpp_headers/file.o 00:01:36.543 CXX test/cpp_headers/ftl.o 00:01:36.543 CXX test/cpp_headers/gpt_spec.o 00:01:36.543 LINK fused_ordering 00:01:36.543 CXX test/cpp_headers/hexlify.o 00:01:36.543 CXX test/cpp_headers/histogram_data.o 00:01:36.543 LINK doorbell_aers 00:01:36.543 CXX test/cpp_headers/idxd.o 00:01:36.543 LINK 
nvme_fuzz 00:01:36.543 CXX test/cpp_headers/idxd_spec.o 00:01:36.543 LINK spdk_bdev 00:01:36.543 CXX test/cpp_headers/init.o 00:01:36.543 CXX test/cpp_headers/ioat.o 00:01:36.543 CXX test/cpp_headers/ioat_spec.o 00:01:36.543 CXX test/cpp_headers/iscsi_spec.o 00:01:36.543 LINK pci_ut 00:01:36.543 LINK scheduler 00:01:36.543 CXX test/cpp_headers/json.o 00:01:36.543 LINK vhost_fuzz 00:01:36.543 CXX test/cpp_headers/jsonrpc.o 00:01:36.543 CXX test/cpp_headers/keyring.o 00:01:36.543 CXX test/cpp_headers/keyring_module.o 00:01:36.543 CXX test/cpp_headers/likely.o 00:01:36.543 CXX test/cpp_headers/log.o 00:01:36.543 CXX test/cpp_headers/lvol.o 00:01:36.814 CXX test/cpp_headers/memory.o 00:01:36.814 CXX test/cpp_headers/mmio.o 00:01:36.814 CXX test/cpp_headers/nbd.o 00:01:36.814 CXX test/cpp_headers/notify.o 00:01:36.815 CXX test/cpp_headers/nvme.o 00:01:36.815 CXX test/cpp_headers/nvme_intel.o 00:01:36.815 CXX test/cpp_headers/nvme_ocssd.o 00:01:36.815 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:36.815 LINK nvme_compliance 00:01:36.815 CXX test/cpp_headers/nvme_spec.o 00:01:36.815 CXX test/cpp_headers/nvme_zns.o 00:01:36.815 CXX test/cpp_headers/nvmf_cmd.o 00:01:36.815 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:36.815 CXX test/cpp_headers/nvmf.o 00:01:36.815 CXX test/cpp_headers/nvmf_spec.o 00:01:36.815 CXX test/cpp_headers/nvmf_transport.o 00:01:36.815 CXX test/cpp_headers/opal.o 00:01:36.815 CXX test/cpp_headers/opal_spec.o 00:01:36.815 LINK fdp 00:01:36.815 CXX test/cpp_headers/pci_ids.o 00:01:36.815 CXX test/cpp_headers/pipe.o 00:01:36.815 CXX test/cpp_headers/queue.o 00:01:36.815 CXX test/cpp_headers/reduce.o 00:01:36.815 CXX test/cpp_headers/rpc.o 00:01:36.815 CXX test/cpp_headers/scheduler.o 00:01:36.815 CXX test/cpp_headers/scsi.o 00:01:36.815 CXX test/cpp_headers/scsi_spec.o 00:01:36.815 CXX test/cpp_headers/sock.o 00:01:36.815 CXX test/cpp_headers/stdinc.o 00:01:36.815 CXX test/cpp_headers/string.o 00:01:36.815 CXX test/cpp_headers/thread.o 00:01:36.815 CXX test/cpp_headers/trace.o 00:01:36.815 CXX test/cpp_headers/trace_parser.o 00:01:36.815 CXX test/cpp_headers/tree.o 00:01:36.815 CXX test/cpp_headers/ublk.o 00:01:36.815 CXX test/cpp_headers/util.o 00:01:37.077 CXX test/cpp_headers/version.o 00:01:37.077 CXX test/cpp_headers/uuid.o 00:01:37.077 CXX test/cpp_headers/vfio_user_pci.o 00:01:37.077 CXX test/cpp_headers/vfio_user_spec.o 00:01:37.077 CXX test/cpp_headers/vhost.o 00:01:37.077 CXX test/cpp_headers/vmd.o 00:01:37.077 CXX test/cpp_headers/xor.o 00:01:37.077 CXX test/cpp_headers/zipf.o 00:01:37.336 LINK memory_ut 00:01:37.594 LINK cuse 00:01:38.160 LINK iscsi_fuzz 00:01:40.689 LINK esnap 00:01:40.949 00:01:40.949 real 0m48.661s 00:01:40.949 user 10m4.260s 00:01:40.949 sys 2m27.155s 00:01:40.949 04:02:28 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:40.949 04:02:28 make -- common/autotest_common.sh@10 -- $ set +x 00:01:40.949 ************************************ 00:01:40.949 END TEST make 00:01:40.949 ************************************ 00:01:40.949 04:02:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:40.949 04:02:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:40.949 04:02:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:40.949 04:02:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:40.949 04:02:28 -- pm/common@44 -- $ pid=3155414 00:01:40.949 04:02:28 -- 
pm/common@50 -- $ kill -TERM 3155414 00:01:40.949 04:02:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:40.949 04:02:28 -- pm/common@44 -- $ pid=3155416 00:01:40.949 04:02:28 -- pm/common@50 -- $ kill -TERM 3155416 00:01:40.949 04:02:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:40.949 04:02:28 -- pm/common@44 -- $ pid=3155418 00:01:40.949 04:02:28 -- pm/common@50 -- $ kill -TERM 3155418 00:01:40.949 04:02:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:40.949 04:02:28 -- pm/common@44 -- $ pid=3155453 00:01:40.949 04:02:28 -- pm/common@50 -- $ sudo -E kill -TERM 3155453 00:01:40.949 04:02:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:40.949 04:02:28 -- nvmf/common.sh@7 -- # uname -s 00:01:40.949 04:02:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:40.949 04:02:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:40.949 04:02:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:40.949 04:02:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:40.949 04:02:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:40.949 04:02:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:40.949 04:02:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:40.949 04:02:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:40.949 04:02:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:40.949 04:02:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:40.949 04:02:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:40.949 04:02:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:40.949 04:02:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:40.949 04:02:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:40.949 04:02:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:40.949 04:02:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:40.949 04:02:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:40.949 04:02:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:40.949 04:02:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.949 04:02:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.949 04:02:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.949 04:02:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.949 04:02:28 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.949 04:02:28 -- paths/export.sh@5 -- # export PATH 00:01:40.949 04:02:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.949 04:02:28 -- nvmf/common.sh@47 -- # : 0 00:01:40.949 04:02:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:40.949 04:02:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:40.949 04:02:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:40.949 04:02:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:40.949 04:02:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:40.949 04:02:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:40.949 04:02:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:40.949 04:02:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:40.949 04:02:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:40.949 04:02:28 -- spdk/autotest.sh@32 -- # uname -s 00:01:40.949 04:02:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:40.949 04:02:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:40.949 04:02:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:40.949 04:02:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:40.949 04:02:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:40.949 04:02:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:40.949 04:02:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:40.949 04:02:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:40.949 04:02:28 -- spdk/autotest.sh@48 -- # udevadm_pid=3210156 00:01:40.949 04:02:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:40.949 04:02:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:40.949 04:02:28 -- pm/common@17 -- # local monitor 00:01:40.949 04:02:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@21 -- # date +%s 00:01:40.949 04:02:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.949 04:02:28 -- pm/common@21 -- # date +%s 00:01:40.949 04:02:28 -- pm/common@25 -- # sleep 1 00:01:40.949 04:02:28 -- pm/common@21 -- # date +%s 00:01:40.949 04:02:28 -- pm/common@21 -- # date +%s 00:01:40.949 04:02:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715738548 00:01:40.949 04:02:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1715738548 00:01:40.949 04:02:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715738548 00:01:40.949 04:02:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715738548 00:01:40.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715738548_collect-vmstat.pm.log 00:01:40.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715738548_collect-cpu-temp.pm.log 00:01:40.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715738548_collect-cpu-load.pm.log 00:01:40.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715738548_collect-bmc-pm.bmc.pm.log 00:01:41.889 04:02:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:41.889 04:02:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:41.889 04:02:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:41.889 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:01:41.889 04:02:29 -- spdk/autotest.sh@59 -- # create_test_list 00:01:41.889 04:02:29 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:41.889 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:01:42.147 04:02:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:42.147 04:02:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.147 04:02:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.147 04:02:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:42.147 04:02:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.147 04:02:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:42.147 04:02:29 -- common/autotest_common.sh@1451 -- # uname 00:01:42.147 04:02:29 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:42.147 04:02:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:42.147 04:02:29 -- common/autotest_common.sh@1471 -- # uname 00:01:42.147 04:02:29 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:42.147 04:02:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:42.147 04:02:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:42.147 04:02:29 -- spdk/autotest.sh@72 -- # hash lcov 00:01:42.147 04:02:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:42.147 04:02:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:42.147 --rc lcov_branch_coverage=1 00:01:42.147 --rc lcov_function_coverage=1 00:01:42.147 --rc genhtml_branch_coverage=1 00:01:42.147 --rc genhtml_function_coverage=1 00:01:42.147 --rc genhtml_legend=1 00:01:42.147 --rc geninfo_all_blocks=1 00:01:42.147 ' 00:01:42.147 04:02:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:42.147 --rc lcov_branch_coverage=1 00:01:42.147 --rc lcov_function_coverage=1 00:01:42.147 --rc genhtml_branch_coverage=1 00:01:42.147 --rc genhtml_function_coverage=1 00:01:42.147 --rc genhtml_legend=1 00:01:42.147 --rc 
geninfo_all_blocks=1 00:01:42.147 ' 00:01:42.147 04:02:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:42.147 --rc lcov_branch_coverage=1 00:01:42.147 --rc lcov_function_coverage=1 00:01:42.147 --rc genhtml_branch_coverage=1 00:01:42.147 --rc genhtml_function_coverage=1 00:01:42.147 --rc genhtml_legend=1 00:01:42.147 --rc geninfo_all_blocks=1 00:01:42.147 --no-external' 00:01:42.147 04:02:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:42.147 --rc lcov_branch_coverage=1 00:01:42.147 --rc lcov_function_coverage=1 00:01:42.147 --rc genhtml_branch_coverage=1 00:01:42.147 --rc genhtml_function_coverage=1 00:01:42.147 --rc genhtml_legend=1 00:01:42.147 --rc geninfo_all_blocks=1 00:01:42.147 --no-external' 00:01:42.147 04:02:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:42.147 lcov: LCOV version 1.14 00:01:42.148 04:02:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:57.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:57.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:57.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:57.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:57.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:57.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:57.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:57.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:15.163 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:15.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:15.164 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:15.164 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:15.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:15.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:15.165 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:15.165 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:15.165 04:03:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:15.165 04:03:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:15.165 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:02:15.165 04:03:02 -- spdk/autotest.sh@91 -- # rm -f 00:02:15.165 04:03:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:16.100 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:16.359 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:16.359 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:16.359 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:16.359 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:16.359 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:16.359 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:16.359 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:16.359 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:16.359 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:16.359 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:16.359 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:16.359 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:16.359 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:16.359 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:16.359 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:16.359 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:16.618 04:03:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:16.618 04:03:04 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:16.618 04:03:04 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:16.618 04:03:04 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:16.618 04:03:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:16.618 04:03:04 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:16.618 04:03:04 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:16.618 04:03:04 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:16.618 04:03:04 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:16.618 04:03:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:16.618 04:03:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:16.618 04:03:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:16.618 04:03:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:16.618 04:03:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:16.618 04:03:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:16.618 No valid GPT data, bailing 00:02:16.618 04:03:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:16.618 04:03:04 -- scripts/common.sh@391 -- # pt= 00:02:16.618 04:03:04 -- scripts/common.sh@392 -- # return 1 00:02:16.618 04:03:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 
00:02:16.618 1+0 records in 00:02:16.618 1+0 records out 00:02:16.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0026523 s, 395 MB/s 00:02:16.618 04:03:04 -- spdk/autotest.sh@118 -- # sync 00:02:16.618 04:03:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:16.618 04:03:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:16.618 04:03:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:18.527 04:03:06 -- spdk/autotest.sh@124 -- # uname -s 00:02:18.527 04:03:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:18.527 04:03:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:18.527 04:03:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:18.527 04:03:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:18.527 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:02:18.527 ************************************ 00:02:18.527 START TEST setup.sh 00:02:18.527 ************************************ 00:02:18.527 04:03:06 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:18.527 * Looking for test storage... 00:02:18.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:18.527 04:03:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:18.527 04:03:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:18.527 04:03:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:18.527 04:03:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:18.527 04:03:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:18.527 04:03:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:18.527 ************************************ 00:02:18.527 START TEST acl 00:02:18.527 ************************************ 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:18.527 * Looking for test storage... 
00:02:18.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:18.527 04:03:06 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:18.527 04:03:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:18.527 04:03:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:18.527 04:03:06 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.429 04:03:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:20.429 04:03:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:20.429 04:03:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.429 04:03:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:20.429 04:03:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:20.429 04:03:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:21.437 Hugepages 00:02:21.437 node hugesize free / total 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 00:02:21.437 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:21.437 04:03:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:21.437 04:03:09 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:21.437 04:03:09 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:21.437 04:03:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:21.694 ************************************ 00:02:21.694 START TEST denied 00:02:21.694 ************************************ 00:02:21.694 04:03:09 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:21.694 04:03:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:21.694 04:03:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:21.694 04:03:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:21.694 04:03:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:21.694 04:03:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:23.064 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:23.064 04:03:10 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:23.064 04:03:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:23.065 04:03:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.065 04:03:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.619 00:02:25.619 real 0m4.030s 00:02:25.619 user 0m1.157s 00:02:25.619 sys 0m2.026s 00:02:25.619 04:03:13 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:25.619 04:03:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:25.619 ************************************ 00:02:25.619 END TEST denied 00:02:25.619 ************************************ 00:02:25.619 04:03:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:25.619 04:03:13 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:25.619 04:03:13 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:25.619 04:03:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:25.619 ************************************ 00:02:25.619 START TEST allowed 00:02:25.619 ************************************ 00:02:25.619 04:03:13 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:25.619 04:03:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:25.619 04:03:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:25.619 04:03:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:25.619 04:03:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.619 04:03:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:28.149 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:28.149 04:03:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:28.149 04:03:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:28.149 04:03:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:28.149 04:03:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.149 04:03:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.049 00:02:30.049 real 0m4.093s 00:02:30.049 user 0m1.127s 00:02:30.049 sys 0m1.890s 00:02:30.049 04:03:17 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:30.049 04:03:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:30.049 ************************************ 00:02:30.049 END TEST allowed 00:02:30.049 ************************************ 00:02:30.049 00:02:30.049 real 0m11.184s 00:02:30.049 user 0m3.482s 00:02:30.049 sys 0m5.864s 00:02:30.049 04:03:17 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:30.049 04:03:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:30.049 ************************************ 00:02:30.049 END TEST acl 00:02:30.049 ************************************ 00:02:30.049 04:03:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:30.049 04:03:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:30.049 04:03:17 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:30.049 04:03:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:30.049 ************************************ 00:02:30.049 START TEST hugepages 00:02:30.049 ************************************ 00:02:30.049 04:03:17 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:30.050 * Looking for test storage... 00:02:30.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35508572 kB' 'MemAvailable: 40194916 kB' 'Buffers: 2696 kB' 'Cached: 18384388 kB' 'SwapCached: 0 kB' 'Active: 14375204 kB' 'Inactive: 4470784 kB' 'Active(anon): 13786044 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462260 kB' 'Mapped: 222752 kB' 'Shmem: 13327140 kB' 'KReclaimable: 240044 kB' 'Slab: 632008 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391964 kB' 'KernelStack: 12992 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14913628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198764 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:30.050 04:03:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... 00:02:30.050-00:02:30.052: the same four xtrace lines (setup/common.sh@31 IFS=': ', @31 read -r var val _, @32 [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]], @32 continue) repeat for every remaining /proc/meminfo field -- MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters, the swap, zswap, dirty and writeback counters, AnonPages, Mapped, Shmem, the slab and kernel counters, the Vmalloc, Percpu and Cma counters, Unaccepted, and HugePages_Total/Free/Rsvd -- none of which matches Hugepagesize ...]
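What the scan above boils down to is get_meminfo from setup/common.sh looking up a single field of /proc/meminfo. A minimal stand-alone sketch of the same pattern follows (illustrative only -- the helper name below is made up for this note, and the real get_meminfo additionally accepts a NUMA node and strips the "Node N " prefix when it reads /sys/devices/system/node/nodeN/meminfo, as later calls in this log show):

    # Look up one field of /proc/meminfo, e.g. Hugepagesize -> 2048 (kB).
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # each meminfo line looks like "Hugepagesize:       2048 kB"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    # usage: default_hugepages=$(get_meminfo_value Hugepagesize)   # -> 2048 on this host
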
00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.052 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.053 04:03:17 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:30.053 04:03:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:30.053 04:03:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:30.053 04:03:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:30.053 04:03:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:30.053 ************************************ 00:02:30.053 START TEST default_setup 00:02:30.053 ************************************ 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.053 04:03:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:31.427 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:31.428 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
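The entries just above and immediately below are scripts/setup.sh doing two things for default_setup: populating the huge-page pool that was requested (CLEAR_HUGE=yes, with HUGENODE/NRHUGE left unset, as the trace shows) and rebinding the ioatdma channels and the NVMe drive to vfio-pci; the remaining devices are rebound in the entries that follow. A rough stand-alone sketch of those two visible effects (this is not code taken from scripts/setup.sh; the paths are the standard kernel procfs/sysfs interfaces, the vfio-pci module is assumed to be loaded already, and root is required):

    # 1) huge pages: 2097152 kB requested / 2048 kB per page = 1024 pages
    echo 1024 > /proc/sys/vm/nr_hugepages
    # 2) rebind one PCI device (the first ioatdma channel above) to vfio-pci
    bdf=0000:00:04.7
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    if [ -e /sys/bus/pci/devices/$bdf/driver ]; then
        echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind
    fi
    echo $bdf > /sys/bus/pci/drivers_probe      # vfio-pci now claims the device
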
00:02:31.428 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:31.428 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:32.368 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37626916 kB' 'MemAvailable: 42313260 kB' 'Buffers: 2696 kB' 'Cached: 18384476 kB' 'SwapCached: 0 kB' 'Active: 14395012 kB' 'Inactive: 4470784 kB' 'Active(anon): 13805852 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481864 kB' 'Mapped: 222796 kB' 'Shmem: 13327228 kB' 'KReclaimable: 240044 kB' 'Slab: 631340 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391296 kB' 'KernelStack: 13152 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935892 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.368 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
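The snapshot above confirms that the pool was set up as requested: get_test_nr_hugepages earlier turned the 2097152 kB request into 2097152 / 2048 = 1024 pages of the default 2048 kB size, and that is exactly what HugePages_Total and HugePages_Free now report. A quick consistency check against the kernel's own accounting (Hugetlb should equal HugePages_Total * Hugepagesize) can be run as a one-liner:

    awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
         END {printf "total=%d size=%dkB total*size=%dkB hugetlb=%dkB\n", t, s, t*s, h}' /proc/meminfo
    # on this run: total=1024 size=2048kB total*size=2097152kB hugetlb=2097152kB
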
[... 00:02:32.368-00:02:32.369: the AnonHugePages lookup keeps walking the snapshot with the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue quartet for each field from Active(anon) through CommitLimit ...] 00:02:32.369 04:03:20
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
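Having established anon=0, the verification pass now repeats the same lookup for HugePages_Surp and then HugePages_Rsvd against the same snapshot, which is what produces the long traces on either side of this point. Purely as an illustration of the bookkeeping being accumulated (this is not the code in setup/common.sh, and the pass/fail rule in verify_nr_hugepages is not reproduced here), the counters of interest can be gathered in a single pass:

    declare -A m
    while IFS=': ' read -r k v _; do m["$k"]=$v; done < /proc/meminfo
    echo "anon=${m[AnonHugePages]} total=${m[HugePages_Total]} free=${m[HugePages_Free]} rsvd=${m[HugePages_Rsvd]} surp=${m[HugePages_Surp]}"
    # expected on this run: anon=0 total=1024 free=1024 rsvd=0 surp=0
    if [[ ${m[HugePages_Total]} -eq 1024 && ${m[HugePages_Surp]} -eq 0 ]]; then
        echo "huge-page pool matches the 1024-page request"
    fi
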
00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37632348 kB' 'MemAvailable: 42318692 kB' 'Buffers: 2696 kB' 'Cached: 18384476 kB' 'SwapCached: 0 kB' 'Active: 14395316 kB' 'Inactive: 4470784 kB' 'Active(anon): 13806156 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482172 kB' 'Mapped: 222740 kB' 'Shmem: 13327228 kB' 'KReclaimable: 240044 kB' 'Slab: 631344 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391300 kB' 'KernelStack: 12912 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.369 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.370 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.370 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:32.370 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... 00:02:32.370-00:02:32.371: the HugePages_Surp lookup repeats the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue quartet for each field from Cached through CmaTotal ...]
00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37632688 kB' 'MemAvailable: 42319032 kB' 'Buffers: 2696 kB' 'Cached: 18384492 kB' 'SwapCached: 0 kB' 'Active: 14393652 kB' 'Inactive: 4470784 kB' 'Active(anon): 13804492 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480560 kB' 'Mapped: 222780 kB' 'Shmem: 13327244 kB' 'KReclaimable: 240044 kB' 'Slab: 631448 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391404 kB' 'KernelStack: 12848 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.371 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.372 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.372 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:32.373 nr_hugepages=1024 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:32.373 resv_hugepages=0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:32.373 surplus_hugepages=0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:32.373 anon_hugepages=0 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37635308 kB' 'MemAvailable: 42321652 kB' 'Buffers: 2696 kB' 'Cached: 18384520 kB' 'SwapCached: 0 kB' 'Active: 14394100 
kB' 'Inactive: 4470784 kB' 'Active(anon): 13804940 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481032 kB' 'Mapped: 222780 kB' 'Shmem: 13327272 kB' 'KReclaimable: 240044 kB' 'Slab: 631448 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391404 kB' 'KernelStack: 12912 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.373 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
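The long runs of "# continue" above are produced by get_meminfo() in setup/common.sh: the helper reads the whole of /proc/meminfo (or a per-node meminfo file when a node number is passed), strips any "Node N " prefix, then scans key by key until it hits the field it was asked for, so every non-matching key leaves one traced continue. The following is a condensed sketch reconstructed from the common.sh@17-@33 trace lines, not a copy of the actual script; the argument handling, the file redirect on mapfile, and the error path are assumptions.

    # extglob is needed for the +([0-9]) pattern below (assumed enabled by the real script elsewhere)
    shopt -s extglob

    get_meminfo() {
        # get  - the meminfo key being looked up (e.g. HugePages_Rsvd)
        # node - optional NUMA node; empty means the system-wide /proc/meminfo
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node file when it exists (node0, node1, ...)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node N "; drop that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # skip every key that is not the requested one - this is what
            # generates the repeated '# continue' entries in the trace
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }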
00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
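Once these scans complete, the test holds surp=0 and resv=0 from the two get_meminfo calls and nr_hugepages=1024, and hugepages.sh@107-@110 only proceed when those totals are consistent. A minimal sketch of that accounting, using the get_meminfo helper sketched above and values echoed in this trace; the exit-on-failure line is an assumption, the real script's failure handling is not shown here.

    # values visible in the trace: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    nr_hugepages=1024                       # echoed at hugepages.sh@102
    total=$(get_meminfo HugePages_Total)    # 1024, re-read at hugepages.sh@110
    # 1024 == 1024 + 0 + 0: the default 2048 kB hugepage pool is fully allocated,
    # with nothing reserved and no surplus pages
    (( total == nr_hugepages + surp + resv )) || exit 1
    # get_nodes (hugepages.sh@112, no_nodes=2) then repeats the lookup per NUMA node,
    # e.g. get_meminfo HugePages_Surp 0 switches to /sys/devices/system/node/node0/meminfo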
00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.374 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:32.375 
04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21421728 kB' 'MemUsed: 11408156 kB' 'SwapCached: 0 kB' 'Active: 8126824 kB' 'Inactive: 187208 kB' 'Active(anon): 7730668 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097744 kB' 'Mapped: 118648 kB' 'AnonPages: 219564 kB' 'Shmem: 7514380 kB' 'KernelStack: 7848 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319580 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:32.375 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.375 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:32.376 node0=1024 expecting 1024 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:32.376 00:02:32.376 real 0m2.498s 00:02:32.376 user 0m0.638s 00:02:32.376 sys 0m0.865s 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:32.376 04:03:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:32.376 ************************************ 00:02:32.376 END TEST default_setup 00:02:32.376 ************************************ 00:02:32.376 04:03:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:32.377 04:03:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:32.377 04:03:20 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:32.377 04:03:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:32.635 ************************************ 00:02:32.635 START TEST per_node_1G_alloc 00:02:32.635 ************************************ 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
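Editor's note: the default_setup trace above finishes by summing the per-node counters it collected, printing "node0=1024 expecting 1024" and pattern-matching the two values (hugepages.sh@126-130). A minimal standalone sketch of that final comparison, assuming the per-node counts have already been gathered; the array name nodes_test comes from the trace, the rest is illustrative and simplified relative to the real verify logic:

    # Sketch of the comparison step at hugepages.sh@126-130: every per-node
    # total must equal the expected value. nodes_test is assumed to have been
    # filled in by the meminfo scans traced above (hypothetical values here).
    declare -A nodes_test=( [0]=1024 )   # what the scan found on this box
    expected=1024                        # what default_setup requested
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting $expected"
        [[ ${nodes_test[$node]} == "$expected" ]] || exit 1
    done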
00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.635 04:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:34.016 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:34.016 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:34.016 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:34.016 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:34.016 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:34.016 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:34.016 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:34.016 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:34.016 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.016 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:34.016 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:34.016 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:34.016 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:34.016 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:34.016 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:34.016 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:34.016 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- 
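Editor's note: the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above converts the 1 GiB request (size=1048576 kB) into 2 MiB pages and assigns the same count to every node passed in, giving nodes_test[0]=512 and nodes_test[1]=512, after which setup.sh is invoked with NRHUGE and HUGENODE. A hedged sketch of that flow; nr_hugepages, NRHUGE, HUGENODE and the workspace path are taken from the trace, everything else is illustrative rather than the exact setup/hugepages.sh code:

    # Sketch: per-node page-count computation, then the setup.sh invocation
    # seen in the trace. Run as root on a two-node machine.
    size_kb=1048576                              # 1 GiB test request
    hugepage_kb=2048                             # Hugepagesize on this machine
    nr_hugepages=$(( size_kb / hugepage_kb ))    # 512
    nodes_test=()
    for node in 0 1; do
        nodes_test[node]=$nr_hugepages           # 512 pages per node, 1024 total
    done
    sudo NRHUGE=$nr_hugepages HUGENODE=0,1 \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    # The resulting per-node allocation can be inspected via sysfs:
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages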
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.016 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37630632 kB' 'MemAvailable: 42316976 kB' 'Buffers: 2696 kB' 'Cached: 18384604 kB' 'SwapCached: 0 kB' 'Active: 14394284 kB' 'Inactive: 4470784 kB' 'Active(anon): 13805124 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481012 kB' 'Mapped: 224492 kB' 'Shmem: 13327356 kB' 'KReclaimable: 240044 kB' 'Slab: 631644 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391600 kB' 'KernelStack: 13008 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14972268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 
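Editor's note: the printf line above is get_meminfo dumping the mapfile'd meminfo contents, and the long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" that follow are common.sh scanning each field with IFS=': ' and read -r var val _ until the requested key matches, then echoing its value. A compact standalone sketch of that parsing pattern; the function name is kept from the trace, but the body is simplified (it reads /proc/meminfo only and omits the "Node N" prefix stripping done at common.sh@29 for per-node meminfo files):

    get_meminfo() {
        # Print the value of one /proc/meminfo field (kB for most fields).
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages    # prints e.g. 0 on the node traced above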
00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 
04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.017 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37630632 kB' 'MemAvailable: 42316976 kB' 'Buffers: 2696 kB' 'Cached: 18384608 kB' 'SwapCached: 0 kB' 'Active: 14397424 kB' 'Inactive: 4470784 kB' 'Active(anon): 13808264 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484164 kB' 'Mapped: 224844 kB' 'Shmem: 13327360 kB' 'KReclaimable: 240044 kB' 'Slab: 631668 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391624 kB' 'KernelStack: 12960 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14974796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.018 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.019 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:34.020 04:03:21 
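Editor's note: at this point the verify step has read AnonHugePages (anon=0) and HugePages_Surp (surp=0), and the trace below repeats the same scan for HugePages_Rsvd. Surplus pages are those allocated beyond nr_hugepages via overcommit; reserved pages are committed to mappings but not yet faulted in. A small sketch of pulling these counters directly and deriving a usable-page estimate; the field names are the standard /proc/meminfo keys, but the free-minus-reserved arithmetic is an illustration, not the exact accounting in setup/hugepages.sh:

    # Sketch: read the hugepage counters the trace is scanning for and
    # estimate how many pages remain available for new mappings.
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    echo "free=$free rsvd=$rsvd surp=$surp usable=$(( free - rsvd ))"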
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37629916 kB' 'MemAvailable: 42316260 kB' 'Buffers: 2696 kB' 'Cached: 18384620 kB' 'SwapCached: 0 kB' 'Active: 14399352 kB' 'Inactive: 4470784 kB' 'Active(anon): 13810192 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485972 kB' 'Mapped: 224336 kB' 'Shmem: 13327372 kB' 'KReclaimable: 240044 kB' 'Slab: 631684 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391640 kB' 'KernelStack: 13024 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14977204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.020 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.020 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.021 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:34.022 nr_hugepages=1024 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:34.022 resv_hugepages=0 00:02:34.022 04:03:21 
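The same scan is then repeated for HugePages_Rsvd above and also comes back 0, so the test now has surp=0 and resv=0 and begins reporting the values it will verify: nr_hugepages=1024 and resv_hugepages=0 so far, with surplus_hugepages and anon_hugepages following just below. Before moving on to the per-node checks it asserts that the kernel-wide totals add up. A rough sketch of that accounting step; the wrapper name is made up, the arithmetic mirrors hugepages.sh@107/@109/@110 in the trace, and it reuses the get_meminfo sketch shown earlier:

    verify_hugepage_totals() {
        local nr_hugepages=1024   # requested earlier in the test (assumption here)
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)    # 0
        resv=$(get_meminfo HugePages_Rsvd)    # 0
        total=$(get_meminfo HugePages_Total)  # 1024
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=0"
        # The kernel-wide pool must account for exactly what was requested,
        # plus any surplus and reserved pages.
        (( total == nr_hugepages + surp + resv )) &&
            (( total == nr_hugepages ))
    }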
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:34.022 surplus_hugepages=0 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:34.022 anon_hugepages=0 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37634980 kB' 'MemAvailable: 42321324 kB' 'Buffers: 2696 kB' 'Cached: 18384648 kB' 'SwapCached: 0 kB' 'Active: 14394132 kB' 'Inactive: 4470784 kB' 'Active(anon): 13804972 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480772 kB' 'Mapped: 224316 kB' 'Shmem: 13327400 kB' 'KReclaimable: 240044 kB' 'Slab: 631660 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391616 kB' 'KernelStack: 13056 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14971108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.022 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.023 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.024 04:03:21 
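With the global total confirmed (get_meminfo HugePages_Total returns 1024, matching nr_hugepages + surp + resv), the test calls get_nodes, which enumerates the NUMA nodes under /sys/devices/system/node and records a per-node page count; as the next lines show, the same assignment repeats for node1 and no_nodes becomes 2, with 512 pages recorded for each node. The trace only shows the already-expanded value 512, so where the per-node count is actually read from is an assumption in this sketch; the ${node##*node} indexing idiom is taken straight from the log:

    shopt -s extglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips ".../node" and keeps only the numeric index.
            # Assumed source: the per-node 2048kB hugepage counter in sysfs.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # at least one NUMA node must be present
    }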
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22466760 kB' 'MemUsed: 10363124 kB' 'SwapCached: 0 kB' 'Active: 8126164 kB' 'Inactive: 187208 kB' 'Active(anon): 7730008 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097804 kB' 'Mapped: 119068 kB' 'AnonPages: 218660 kB' 'Shmem: 7514440 kB' 'KernelStack: 7896 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319628 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- 
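At this point get_meminfo is being called with an explicit node argument (HugePages_Surp 0), so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo; that file prefixes every line with "Node 0 ", which the ${mem[@]#Node +([0-9]) } expansion strips before the same field-by-field scan runs again. The node0 dump above shows 512 hugepages total, 512 free and 0 surplus on that node. A condensed sketch of the surrounding loop, reconstructed from hugepages.sh@115-117 in the trace; nodes_test holding the expected per-node counts is set up earlier in the test and is assumed here, and what the test ultimately compares the per-node numbers against lies beyond this excerpt:

    nodes_test=(512 512)   # assumed: expected split of the 1024 pages across 2 nodes
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # fold reserved pages into the expectation
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node scan seen in the trace
        echo "node$node: expected=${nodes_test[node]} surplus=$surp"
    done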
setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.024 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.025 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15168256 kB' 'MemUsed: 12543588 kB' 'SwapCached: 0 kB' 'Active: 6268564 kB' 'Inactive: 4283576 kB' 'Active(anon): 6075560 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10289568 kB' 'Mapped: 104832 kB' 'AnonPages: 262744 kB' 'Shmem: 5812988 kB' 'KernelStack: 5160 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124644 kB' 'Slab: 312032 kB' 'SReclaimable: 124644 kB' 'SUnreclaim: 187388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.026 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:34.027 node0=512 expecting 512 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:34.027 node1=512 expecting 512 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:34.027 00:02:34.027 real 0m1.557s 00:02:34.027 user 0m0.674s 00:02:34.027 sys 0m0.848s 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:34.027 04:03:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:34.027 ************************************ 00:02:34.027 END TEST per_node_1G_alloc 00:02:34.027 ************************************ 00:02:34.027 04:03:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:34.027 04:03:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.027 04:03:21 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.027 04:03:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:34.027 ************************************ 00:02:34.027 START TEST even_2G_alloc 00:02:34.027 ************************************ 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:34.027 04:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:34.027 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.028 04:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:35.401 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.401 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:35.401 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:02:35.401 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.401 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.401 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.401 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.401 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.401 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.401 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.401 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.401 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.401 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.401 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.401 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.401 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.401 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:35.401 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.662 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37628196 kB' 'MemAvailable: 42314540 kB' 'Buffers: 2696 kB' 'Cached: 18384736 kB' 'SwapCached: 0 kB' 'Active: 14387644 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798484 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474196 kB' 'Mapped: 223104 kB' 'Shmem: 13327488 kB' 'KReclaimable: 240044 kB' 'Slab: 631592 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391548 kB' 'KernelStack: 12928 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14944968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.663 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:35.664 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37630080 kB' 'MemAvailable: 42316424 kB' 'Buffers: 2696 kB' 'Cached: 18384740 kB' 'SwapCached: 0 kB' 'Active: 14387940 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798780 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474500 kB' 'Mapped: 223104 kB' 'Shmem: 13327492 kB' 'KReclaimable: 240044 kB' 'Slab: 631584 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391540 kB' 'KernelStack: 12896 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14944988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.664 
04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:35.664 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # ... ("IFS=': '", "read -r var val _", "continue" repeated for each remaining /proc/meminfo field; none match HugePages_Surp until the last one) ...
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
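The get_meminfo traces above and below follow one simple pattern: read the meminfo file with IFS=': ', compare each field name against the requested key, and echo the matching value. A minimal sketch of that pattern, reconstructed from the traced commands (not the verbatim setup/common.sh source; the function name meminfo_value is made up for illustration):

meminfo_value() {
    # scan /proc/meminfo and print the value of one field, e.g. HugePages_Surp
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}
# usage: surp=$(meminfo_value HugePages_Surp)    # yields 0 in the run above
# (the traced helper also accepts a NUMA node and then reads
#  /sys/devices/system/node/node<N>/meminfo, stripping the "Node <N>" prefix
#  with mem=("${mem[@]#Node +([0-9]) }") before matching)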
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37633580 kB' 'MemAvailable: 42319924 kB' 'Buffers: 2696 kB' 'Cached: 18384756 kB' 'SwapCached: 0 kB' 'Active: 14387404 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798244 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473968 kB' 'Mapped: 223060 kB' 'Shmem: 13327508 kB' 'KReclaimable: 240044 kB' 'Slab: 631560 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391516 kB' 'KernelStack: 12928 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14944640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB'
00:02:35.665 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # ... (the same read/continue scan runs over the fields above; none before the HugePages_* block match HugePages_Rsvd) ...
00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:35.666 nr_hugepages=1024 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:35.666 resv_hugepages=0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:35.666 surplus_hugepages=0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:35.666 anon_hugepages=0 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 
04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37636692 kB' 'MemAvailable: 42323036 kB' 'Buffers: 2696 kB' 'Cached: 18384756 kB' 'SwapCached: 0 kB' 'Active: 14388296 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799136 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474944 kB' 'Mapped: 223060 kB' 'Shmem: 13327508 kB' 'KReclaimable: 240044 kB' 'Slab: 631560 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391516 kB' 'KernelStack: 12944 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.666 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.667 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
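The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" records above is bash xtrace from setup/common.sh's get_meminfo helper: it reads the target meminfo file (/proc/meminfo here, or a node's copy under sysfs when a node id is passed) one field at a time and only stops once the requested key, HugePages_Total in this pass, is reached. A minimal sketch of that scan, assuming the same mapfile/IFS=': ' pattern visible in the trace; the function name and error handling below are illustrative, not the exact SPDK source:

#!/usr/bin/env bash
shopt -s extglob
# Hedged reconstruction of the scan traced above: read a meminfo file field by
# field until the requested key is found, then print its value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live in sysfs and carry a "Node <id> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the per-node prefix (extglob)
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching field appears in the log as one
        # "[[ <field> == <key> ]]" record followed by "continue".
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Called as "get_meminfo_sketch HugePages_Total 1" on this box it should print 512, matching the per-node dumps later in the run.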
00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22471652 kB' 'MemUsed: 10358232 kB' 'SwapCached: 0 kB' 'Active: 8123304 kB' 'Inactive: 187208 kB' 'Active(anon): 7727148 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097808 kB' 'Mapped: 118376 kB' 'AnonPages: 215856 kB' 'Shmem: 7514444 kB' 'KernelStack: 7864 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319484 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.668 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
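The node0 dump a little above reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, and the scan still running here is extracting that surplus value (0) for node0; node1 follows immediately below. The same per-node counters are also exposed directly as per-size sysfs files, which is a handy way to cross-check the numbers without parsing meminfo at all. This is not what setup/common.sh does, only an equivalent read of the same kernel counters:

#!/usr/bin/env bash
# Cross-check of the per-node 2 MiB hugepage counters seen in the dumps:
# total/free/surplus per NUMA node, read straight from sysfs.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s: total=%s free=%s surplus=%s\n' \
        "${node##*/}" \
        "$(<"$hp/nr_hugepages")" \
        "$(<"$hp/free_hugepages")" \
        "$(<"$hp/surplus_hugepages")"
done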
00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15167460 kB' 'MemUsed: 12544384 kB' 'SwapCached: 0 kB' 'Active: 6264816 kB' 'Inactive: 4283576 kB' 'Active(anon): 6071812 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10289644 kB' 'Mapped: 104752 kB' 'AnonPages: 258748 kB' 'Shmem: 5813064 kB' 'KernelStack: 5288 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124644 kB' 'Slab: 312060 kB' 'SReclaimable: 124644 kB' 'SUnreclaim: 187416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.669 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
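Once both per-node passes return a surplus of 0, hugepages.sh folds the reserved and surplus counts into each node's tally and asserts that every node ended up with its even share, which is what the "node0=512 expecting 512" / "node1=512 expecting 512" lines just below record. A loose sketch of that bookkeeping, with names modelled on the nodes_test/resv variables in the trace; the real script also builds sorted_t/sorted_s arrays to compare against the per-node view from sysfs:

#!/usr/bin/env bash
# Per-node bookkeeping as traced around hugepages.sh@115-130: each of the two
# nodes is expected to hold half of the 1024 pages, plus any reserved or
# surplus pages reported for it (both 0 in this run).
nodes_test=(512 512)   # even 2G split: 1024 pages over 2 nodes
resv=0                 # HugePages_Rsvd
for node in "${!nodes_test[@]}"; do
    surp=0             # HugePages_Surp for this node, 0 in the dumps above
    (( nodes_test[node] += resv + surp ))
    echo "node$node=${nodes_test[node]} expecting 512"
    (( nodes_test[node] == 512 )) || exit 1
done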
00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:35.670 node0=512 expecting 512 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:35.670 node1=512 expecting 512 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:35.670 00:02:35.670 real 0m1.562s 00:02:35.670 user 0m0.667s 00:02:35.670 sys 0m0.861s 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:35.670 04:03:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:35.670 ************************************ 00:02:35.670 END TEST even_2G_alloc 00:02:35.670 ************************************ 00:02:35.670 04:03:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:35.670 04:03:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:35.670 04:03:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:35.670 04:03:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.670 ************************************ 00:02:35.670 START TEST odd_alloc 00:02:35.670 ************************************ 00:02:35.670 04:03:23 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.670 04:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:37.046 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.046 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:37.046 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.046 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.046 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.046 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.046 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.046 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.046 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
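The odd_alloc test starting here requests 2098176 kB of hugepage memory (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes), which the script rounds to nr_hugepages=1025 pages of 2048 kB, and it deliberately splits them unevenly: 512 pages on node0 and 513 on node1, as the nodes_test assignments above show. In this run the allocation itself is performed by spdk/scripts/setup.sh, whose device output follows; purely as an illustration, the same per-node split could be requested by writing the counts into sysfs directly:

#!/usr/bin/env bash
# Illustrative only: apply the 512/513 odd split that odd_alloc asks for by
# writing per-node 2 MiB hugepage counts to sysfs (needs root). The real run
# delegates this to spdk/scripts/setup.sh.
declare -A want=([0]=512 [1]=513)    # 1025 pages total over 2 NUMA nodes
for node in "${!want[@]}"; do
    echo "${want[$node]}" \
        > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done
grep -E '^HugePages_(Total|Free):' /proc/meminfo   # expect 1025 / 1025 afterwards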
00:02:37.046 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.046 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.046 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.046 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.046 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.046 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.046 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.046 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610436 kB' 'MemAvailable: 42296780 kB' 'Buffers: 2696 kB' 'Cached: 18384876 kB' 'SwapCached: 0 kB' 'Active: 14387440 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798280 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473892 kB' 'Mapped: 223088 kB' 'Shmem: 13327628 kB' 'KReclaimable: 240044 kB' 'Slab: 631440 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391396 kB' 'KernelStack: 12928 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14945236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199068 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 
04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.046 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 
04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
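The long run of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue lines above is the xtrace of a shell loop that walks every /proc/meminfo field until it reaches the requested key (AnonHugePages here), echoes its value, and returns. A minimal stand-alone sketch of that lookup pattern, assuming a hypothetical helper name get_meminfo_value rather than the real get_meminfo in SPDK's test/setup/common.sh (which additionally supports per-NUMA-node meminfo files, as the /sys/devices/system/node/.../meminfo check in the trace shows):

#!/usr/bin/env bash
# Hypothetical, simplified version of the lookup traced above; not the SPDK script itself.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every field until the requested key
        echo "$val"                        # value only; the 'kB' unit lands in $_
        return 0
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_value AnonHugePages)    # the trace above ends in 'echo 0', i.e. anon=0
echo "AnonHugePages: ${anon} kB"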
00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37623232 kB' 'MemAvailable: 42309576 kB' 'Buffers: 2696 kB' 'Cached: 18384880 kB' 'SwapCached: 0 kB' 'Active: 14387772 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798612 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474220 kB' 'Mapped: 223088 kB' 'Shmem: 13327632 kB' 'KReclaimable: 240044 kB' 'Slab: 631428 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391384 kB' 'KernelStack: 12912 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14945252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.047 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37623916 kB' 'MemAvailable: 42310260 kB' 'Buffers: 2696 kB' 'Cached: 18384896 kB' 'SwapCached: 0 kB' 'Active: 14387716 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798556 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474144 kB' 'Mapped: 223076 kB' 'Shmem: 13327648 kB' 'KReclaimable: 240044 kB' 'Slab: 631428 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391384 kB' 'KernelStack: 12928 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14945276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 
04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.048 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 
04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.049 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:37.311 nr_hugepages=1025 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.311 resv_hugepages=0 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.311 surplus_hugepages=0 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.311 anon_hugepages=0 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.311 04:03:25 
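At this point the three lookups are complete and the test echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before re-reading HugePages_Total. A rough sketch of the consistency check implied by the (( ... )) lines above, reusing the hypothetical get_meminfo_value helper from the earlier sketch (assumed names, not the SPDK scripts):

# Rough sketch of the odd_alloc accounting check; helper and variable names are assumptions.
nr_hugepages=1025                                  # odd page count requested by this test
surp=$(get_meminfo_value HugePages_Surp)           # 0 in the log above
resv=$(get_meminfo_value HugePages_Rsvd)           # 0 in the log above
total=$(get_meminfo_value HugePages_Total)         # expected to be 1025

# The pool is consistent when the kernel-reported total matches the requested
# count once surplus and reserved pages are accounted for.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "hugepage pool matches the requested odd allocation (${nr_hugepages} pages)"
else
    echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
    exit 1
fi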
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37623732 kB' 'MemAvailable: 42310076 kB' 'Buffers: 2696 kB' 'Cached: 18384916 kB' 'SwapCached: 0 kB' 'Active: 14387676 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798516 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474060 kB' 'Mapped: 223076 kB' 'Shmem: 13327668 kB' 'KReclaimable: 240044 kB' 'Slab: 631428 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391384 kB' 'KernelStack: 12928 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14945296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 
04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.311 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.312 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22469644 kB' 'MemUsed: 10360240 kB' 'SwapCached: 0 kB' 'Active: 8123684 kB' 'Inactive: 187208 kB' 'Active(anon): 7727528 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097828 kB' 'Mapped: 118324 kB' 'AnonPages: 216208 kB' 'Shmem: 7514464 kB' 'KernelStack: 7896 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319512 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
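The block above is the get_meminfo helper walking a meminfo file: it targets /proc/meminfo, or the per-node file under /sys/devices/system/node/node<N>/meminfo when a node id is supplied, strips the "Node <id> " prefix, and scans field by field with IFS=': ' until the requested key is reached. A minimal standalone sketch of that pattern, assuming a simplified body (the names get, node, mem_f and mem mirror the trace; the real helper lives in setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem
        local mem_f=/proc/meminfo
        # Prefer the per-NUMA-node view when a node id is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; drop that part.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the fields until the requested key matches, then print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

For example, get_meminfo HugePages_Surp 0 prints the surplus hugepage count for node0, which is the "echo 0 / return 0" pair that shows up further down in this trace.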
00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.313 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
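The surrounding read/continue entries are all in service of one assertion, traced at setup/hugepages.sh@110 a little earlier: the HugePages_Total reported by /proc/meminfo has to equal the requested page count plus surplus and reserved pages. Condensed into a few lines (a hedged sketch; awk stands in for the script's bash read loop, and the literals are this run's values):

    # Assumed condensation of the hugepages.sh@110 check, with this run's values.
    nr_hugepages=1025 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || { echo "unexpected HugePages_Total: $total"; exit 1; }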
00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15153836 kB' 'MemUsed: 12558008 kB' 'SwapCached: 0 kB' 'Active: 6263964 kB' 'Inactive: 4283576 kB' 'Active(anon): 6070960 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10289824 kB' 'Mapped: 104752 kB' 'AnonPages: 257816 kB' 'Shmem: 5813244 kB' 'KernelStack: 5016 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124644 kB' 'Slab: 311916 kB' 'SReclaimable: 124644 kB' 'SUnreclaim: 187272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
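Above, the node0 walk ends with get_meminfo returning 0 for HugePages_Surp, and the same loop now repeats for node1. Once both nodes are read, the pass/fail decision (visible further down as "node0=512 expecting 513", "node1=513 expecting 512" and the final [[ 512 513 == \5\1\2\ \5\1\3 ]]) compares observed and expected per-node counts as sorted sets, so it does not matter which NUMA node received the extra odd page. A rough sketch of that comparison with this run's counts hard-coded (hypothetical variable names, simplified from setup/hugepages.sh):

    # Observed split read back from /sys/devices/system/node/node*/meminfo:
    nodes_sys=(512 513)
    # Expected split of the 1025 requested pages (extra page assumed on node0 here):
    nodes_test=(513 512)
    surp=0                            # HugePages_Surp was 0 on both nodes in this run
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += surp ))
        sorted_t[nodes_test[node]]=1      # index the expected counts
        sorted_s[nodes_sys[node]]=1       # index the observed counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Indexed-array keys enumerate in ascending order: "512 513" on both sides.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo PASS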
00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.314 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.315 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:37.316 node0=512 expecting 513 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:37.316 node1=513 expecting 512 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:37.316 00:02:37.316 real 0m1.513s 00:02:37.316 user 0m0.661s 00:02:37.316 sys 0m0.816s 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:37.316 04:03:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:37.316 ************************************ 00:02:37.316 END TEST odd_alloc 00:02:37.316 ************************************ 00:02:37.316 04:03:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:37.316 04:03:25 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:37.316 04:03:25 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:37.316 04:03:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:37.316 ************************************ 00:02:37.316 START TEST custom_alloc 00:02:37.316 ************************************ 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.316 
04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.316 04:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.692 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.692 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.692 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.692 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.692 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.692 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.692 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.692 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.692 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.692 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.692 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.692 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.692 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.692 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.692 0000:80:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:02:38.692 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.692 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36571268 kB' 'MemAvailable: 41257612 kB' 'Buffers: 2696 kB' 'Cached: 18385000 kB' 'SwapCached: 0 kB' 'Active: 14388512 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799352 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474420 kB' 'Mapped: 223080 kB' 'Shmem: 13327752 kB' 'KReclaimable: 240044 kB' 'Slab: 631136 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391092 kB' 'KernelStack: 12912 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14945488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.692 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.693 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36571020 kB' 'MemAvailable: 41257364 kB' 'Buffers: 2696 kB' 'Cached: 18385004 kB' 'SwapCached: 0 kB' 'Active: 14387996 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798836 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474356 kB' 'Mapped: 223156 kB' 'Shmem: 13327756 kB' 'KReclaimable: 240044 kB' 'Slab: 631184 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391140 kB' 'KernelStack: 12928 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14945504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.694 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.695 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.957 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36571876 kB' 'MemAvailable: 41258220 kB' 'Buffers: 2696 kB' 'Cached: 18385020 kB' 'SwapCached: 0 kB' 'Active: 14388040 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798880 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474396 kB' 'Mapped: 223080 kB' 'Shmem: 13327772 kB' 'KReclaimable: 240044 kB' 'Slab: 631176 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 
391132 kB' 'KernelStack: 12960 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14945160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 
04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.958 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 
04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.959 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
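The long run of "continue" entries above is setup/common.sh's get_meminfo scanning the meminfo snapshot one field at a time until it hits the requested key, HugePages_Rsvd in this pass; every non-matching field produces one @32 "continue" line, and the @33 echo that follows returns the matched value. A minimal sketch of that scan, reconstructed from the @17-@33 trace entries rather than copied from the script, so details may differ:

    get_meminfo() {                      # get_meminfo <field> [<node>]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # A per-node query reads the node-local meminfo instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9] }            # node-local lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # each mismatch logs one "continue" above
            echo "$val"                         # 0 for HugePages_Rsvd in this run
            return 0
        done < "$mem_f"
    }

With IFS set to ': ' the value lands in val and any trailing "kB" unit is absorbed by the throwaway _ variable, so the caller can use the echoed number directly in arithmetic.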
00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:38.960 nr_hugepages=1536 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.960 resv_hugepages=0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.960 surplus_hugepages=0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.960 anon_hugepages=0 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36574516 kB' 'MemAvailable: 41260860 kB' 'Buffers: 2696 kB' 'Cached: 18385040 kB' 'SwapCached: 0 kB' 'Active: 14387728 kB' 'Inactive: 4470784 kB' 'Active(anon): 13798568 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474080 kB' 'Mapped: 223080 kB' 'Shmem: 13327792 kB' 'KReclaimable: 240044 kB' 'Slab: 631176 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391132 kB' 'KernelStack: 12880 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14945184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
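Once resv comes back as 0, hugepages.sh echoes its derived counters (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and runs the consistency check traced at @107-@110 above: the requested 1536 pages must equal nr_hugepages + surplus + reserved, and the same total must also be visible in the global /proc/meminfo, which is what the scan continuing below looks up as HugePages_Total. The arithmetic in isolation, with the values from this run (illustrative only, not the script verbatim):

    nr_hugepages=1536   # echoed at hugepages.sh@102
    resv=0              # HugePages_Rsvd returned by the scan above
    surp=0              # surplus_hugepages
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        && echo "custom allocation accounted for: $total pages"

The global snapshot printed at @16 already shows the expected state: HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, and with Hugepagesize: 2048 kB that is 1536 * 2048 kB = 3145728 kB, matching the Hugetlb line.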
00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.960 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.961 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:02:38.962 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22459632 kB' 'MemUsed: 10370252 kB' 'SwapCached: 0 kB' 'Active: 8124344 kB' 'Inactive: 187208 kB' 'Active(anon): 7728188 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097832 kB' 'Mapped: 118336 kB' 'AnonPages: 216824 kB' 'Shmem: 7514468 kB' 'KernelStack: 7880 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319388 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 203988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.963 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14115628 kB' 'MemUsed: 13596216 kB' 'SwapCached: 0 kB' 'Active: 6263248 kB' 'Inactive: 4283576 kB' 'Active(anon): 6070244 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10289948 kB' 'Mapped: 104744 kB' 'AnonPages: 257048 kB' 'Shmem: 5813368 kB' 'KernelStack: 5016 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124644 kB' 'Slab: 311788 kB' 'SReclaimable: 124644 kB' 'SUnreclaim: 187144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.964 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
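With the global total confirmed, get_nodes (traced at hugepages.sh@27-@33 above) records the requested per-node split, nodes_sys[0]=512 and nodes_sys[1]=1024 on this two-node machine, and the @115-@117 loop reads HugePages_Surp from each node's own meminfo under /sys/devices/system/node/nodeN/, whose lines carry a "Node N " prefix that the @29 expansion strips. The node0 snapshot above shows HugePages_Total: 512 with zero surplus, and its usage figures are self-consistent: MemUsed 10370252 kB = MemTotal 32829884 kB - MemFree 22459632 kB. A compact sketch of this per-node pass, assuming the node layout seen in this run:

    # Illustrative per-node surplus check; nodes_test mirrors the requested split.
    nodes_test=([0]=512 [1]=1024)
    for node in "${!nodes_test[@]}"; do
        surp=$(awk '/HugePages_Surp:/ {print $NF}' \
                   "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))       # surplus is 0 on both nodes in this run
        echo "node$node expects ${nodes_test[node]} hugepages"
    done

The node1 read traced just above follows the same path; its snapshot shows HugePages_Total: 1024 with HugePages_Surp: 0, which the scan continuing below confirms.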
00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.965 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.966 node0=512 
expecting 512 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:38.966 node1=1024 expecting 1024 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:38.966 00:02:38.966 real 0m1.636s 00:02:38.966 user 0m0.651s 00:02:38.966 sys 0m0.951s 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:38.966 04:03:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:38.966 ************************************ 00:02:38.966 END TEST custom_alloc 00:02:38.966 ************************************ 00:02:38.966 04:03:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:38.966 04:03:26 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:38.966 04:03:26 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:38.966 04:03:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:38.966 ************************************ 00:02:38.966 START TEST no_shrink_alloc 00:02:38.966 ************************************ 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:38.966 04:03:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.966 04:03:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.341 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.341 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:40.341 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:40.341 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:40.341 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:40.341 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:40.341 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:40.341 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:40.341 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:40.341 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.341 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:40.341 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:40.341 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:40.341 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:40.341 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:40.341 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:40.341 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37607796 kB' 'MemAvailable: 42294140 kB' 'Buffers: 2696 kB' 'Cached: 18385132 kB' 'SwapCached: 0 kB' 'Active: 14388744 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799584 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474864 kB' 'Mapped: 223116 kB' 'Shmem: 13327884 kB' 'KReclaimable: 240044 kB' 'Slab: 631192 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391148 kB' 'KernelStack: 12896 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14945756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.341 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610096 kB' 'MemAvailable: 42296440 kB' 'Buffers: 2696 kB' 'Cached: 18385152 kB' 'SwapCached: 0 kB' 'Active: 14388572 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799412 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474704 kB' 'Mapped: 223196 kB' 'Shmem: 13327904 kB' 'KReclaimable: 240044 kB' 'Slab: 631232 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391188 kB' 'KernelStack: 12912 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14945772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.342 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
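Note: the xtrace entries around this point all come from the get_meminfo helper in setup/common.sh. It reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node id is passed) into an array with mapfile, strips any leading "Node N " prefix, then walks each "key: value" row with IFS=': ' read -r var val _, skipping fields with continue until the requested key (HugePages_Surp here) matches and its value is echoed. A minimal standalone sketch of the same lookup pattern follows; it is an illustration only, and the name get_meminfo_value is assumed, not the actual SPDK helper.

    #!/usr/bin/env bash
    # Sketch only: mirrors the lookup pattern traced above, not setup/common.sh itself.
    shopt -s extglob
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Use the per-node view when a node id is supplied and the sysfs file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }        # per-node files prefix every row with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        echo 0                                 # fall back to 0 if the field is missing
    }
    # e.g. get_meminfo_value HugePages_Surp      -> system-wide surplus huge pages
    #      get_meminfo_value HugePages_Total 0   -> huge pages reported by NUMA node 0
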
00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.343 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.344 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
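Note: these get_meminfo passes are part of the verify_nr_hugepages step in setup/hugepages.sh, which collects the anon/surp counters seen above and then checks each node's hugepage count against the expected split, the same pattern as the "node0=512 expecting 512" / "node1=1024 expecting 1024" / "[[ 512,1024 == 512,1024 ]]" lines earlier in this log. A rough sketch of that comparison step, using assumed variable names and the sample values from the log rather than the real script's live queries:

    #!/usr/bin/env bash
    # Sketch only: per-node split check, values taken from the custom_alloc result above.
    declare -A nodes_test=( [0]=512 [1]=1024 )   # assumed sample values, not queried live
    expected="512,1024"

    got=""
    for node in $(printf '%s\n' "${!nodes_test[@]}" | sort -n); do
        echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]}"
        got+=${got:+,}${nodes_test[$node]}       # build "512,1024" in node order
    done

    [[ $got == "$expected" ]] || { echo "unexpected hugepage split: $got"; exit 1; }
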
00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610128 kB' 'MemAvailable: 42296472 kB' 'Buffers: 2696 kB' 'Cached: 18385152 kB' 'SwapCached: 0 kB' 'Active: 14388552 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799392 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474668 kB' 'Mapped: 223096 kB' 'Shmem: 13327904 kB' 'KReclaimable: 240044 kB' 'Slab: 631208 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391164 kB' 'KernelStack: 12944 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14945796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:40.609 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
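
Each console line above carries two timestamps: the leading "00:02:40.xxx" is added by the Jenkins pipeline timestamper, while the "04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@NN --" prefix is bash xtrace output, printed before every traced command once the test scripts export a PS4 and run set -x. A rough illustration of how such a prefix can be produced is below; the variable name TEST_TAG and the exact format string are assumptions for illustration only, not the definition used by the SPDK scripts.

    # Illustration only: PS4 is expanded like PS1, so \t yields the HH:MM:SS
    # wall-clock time and ${BASH_SOURCE}@${LINENO} yields the file and line of
    # the command being traced. TEST_TAG is a hypothetical stand-in for the
    # test name seen in this log (setup.sh.hugepages.no_shrink_alloc).
    export PS4=' \t ${TEST_TAG:-} -- ${BASH_SOURCE##*/}@${LINENO} -- '
    set -x
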
00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.610 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 
04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
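
The trace above is setup/common.sh's get_meminfo helper resolving HugePages_Rsvd: it snapshots /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node <N>" prefix, then walks the entries with IFS=': ' and read -r var val _, continuing past every key until the requested one matches and its value (here 0) is echoed. The following is a simplified, self-contained reconstruction of that lookup based on the trace, not a verbatim copy of the SPDK helper.

    #!/usr/bin/env bash
    # Simplified reconstruction of the lookup traced above (not SPDK's code).
    # get_meminfo <Key> [<node>] prints the value of <Key>, taken from
    # /sys/devices/system/node/node<node>/meminfo when a node is supplied
    # and from /proc/meminfo otherwise.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem mem_f=/proc/meminfo
        shopt -s extglob
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines are prefixed with "Node <N> "; strip it so the
        # key comparison below works for both the global and per-node files.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this section, get_meminfo HugePages_Rsvd yields 0 (resv=0 just below) and get_meminfo HugePages_Total yields 1024, which is what the (( 1024 == nr_hugepages + surp + resv )) check at hugepages.sh@107 consumes.
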
00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:40.611 nr_hugepages=1024 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.611 resv_hugepages=0 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.611 surplus_hugepages=0 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.611 anon_hugepages=0 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37614348 kB' 'MemAvailable: 42300692 kB' 'Buffers: 2696 kB' 'Cached: 18385156 kB' 'SwapCached: 0 kB' 'Active: 14388676 kB' 'Inactive: 4470784 kB' 'Active(anon): 13799516 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474788 kB' 'Mapped: 223096 kB' 'Shmem: 13327908 kB' 'KReclaimable: 240044 kB' 'Slab: 631208 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391164 kB' 'KernelStack: 12928 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14945800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:40.611 
04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.611 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.612 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.612 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21403076 kB' 'MemUsed: 11426808 kB' 'SwapCached: 0 kB' 'Active: 8125084 kB' 'Inactive: 187208 kB' 'Active(anon): 7728928 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8097924 kB' 'Mapped: 118352 kB' 'AnonPages: 217504 kB' 'Shmem: 7514560 kB' 'KernelStack: 7880 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319400 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
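
After the global counters check out, the trace enumerates the NUMA nodes (get_nodes at hugepages.sh@27-33 finds no_nodes=2) and re-runs the same lookup against /sys/devices/system/node/node0/meminfo, whose dump just above reports HugePages_Total: 1024 and HugePages_Free: 1024 for node0. A standalone way to pull the same per-node counters is sketched below; it is an illustrative snippet reading the same files as the traced helper, not part of the SPDK scripts.

    # Illustrative only: report per-node hugepage counters from the per-node
    # meminfo files. Lines there look like "Node 0 HugePages_Total:  1024",
    # so the value is the last field.
    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        free=$(awk '/HugePages_Free:/ {print $NF}' "$node/meminfo")
        echo "node$n: HugePages_Total=$total HugePages_Free=$free"
    done

On this machine node0 would report 1024 total and 1024 free, matching the "node0=1024 expecting 1024" result later in the trace.
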
00:02:40.613 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.614 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.615 04:03:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:40.615 node0=1024 expecting 1024 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.615 04:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.994 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.994 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.994 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.994 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.994 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.994 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.994 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.994 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.994 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.994 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.994 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.994 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.994 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.994 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.994 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.994 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.994 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.994 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.994 04:03:29 
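The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time with IFS=': ' until it reaches the requested key (HugePages_Surp here), echoing the value and returning; the "node0=1024 expecting 1024" line then confirms the allocated hugepages match the expectation before setup.sh is re-run for the final output pass. A minimal stand-alone sketch of that lookup, system-wide only (the real helper also reads the per-node files under /sys/devices/system/node/nodeN/meminfo, which is omitted here) and with a hypothetical name get_meminfo_value rather than the exact SPDK function:

get_meminfo_value() {
    # Print the numeric value of one /proc/meminfo field, e.g. HugePages_Surp.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Field names end with ':', so IFS=': ' splits "HugePages_Total:  1024" into var/val.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}

For example, get_meminfo_value HugePages_Total prints 1024 on this node, matching the meminfo snapshot printed in the trace below.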
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37586380 kB' 'MemAvailable: 42272724 kB' 'Buffers: 2696 kB' 'Cached: 18385240 kB' 'SwapCached: 0 kB' 'Active: 14394100 kB' 'Inactive: 4470784 kB' 'Active(anon): 13804940 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480072 kB' 'Mapped: 223540 kB' 'Shmem: 13327992 kB' 'KReclaimable: 240044 kB' 'Slab: 631432 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391388 kB' 'KernelStack: 12944 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14952236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199008 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.994 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.995 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37586648 kB' 'MemAvailable: 42272992 kB' 'Buffers: 2696 kB' 'Cached: 18385240 kB' 'SwapCached: 0 kB' 'Active: 14394904 kB' 'Inactive: 4470784 kB' 'Active(anon): 13805744 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480888 kB' 'Mapped: 223964 kB' 'Shmem: 13327992 kB' 'KReclaimable: 240044 kB' 'Slab: 631432 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391388 kB' 'KernelStack: 12944 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14952256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.996 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.997 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 
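At this point the trace shows verify_nr_hugepages in setup/hugepages.sh collecting its counters through the same helper: anon=0 (AnonHugePages), surp=0 (HugePages_Surp), and next HugePages_Rsvd, before repeating the per-node comparison against the expected 1024 pages. A rough sketch of that check, reusing the hypothetical get_meminfo_value above and a simplified system-wide total in place of the script's per-node bookkeeping (illustration only, not the exact SPDK logic):

verify_hugepages_sketch() {
    # Illustrative only: read the counters seen in the trace and flag a count mismatch.
    local expected=${1:-1024} anon surp resv total
    anon=$(get_meminfo_value AnonHugePages)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    echo "node0=$total expecting $expected (anon=$anon surp=$surp resv=$resv)"
    [[ $total == "$expected" ]]
}

Calling verify_hugepages_sketch 1024 on this machine would succeed, matching the [[ 1024 == 1024 ]] comparison recorded earlier in the log.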
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37587168 kB' 'MemAvailable: 42273512 kB' 'Buffers: 2696 kB' 'Cached: 18385240 kB' 'SwapCached: 0 kB' 'Active: 14391324 kB' 'Inactive: 4470784 kB' 'Active(anon): 13802164 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477348 kB' 'Mapped: 223964 kB' 'Shmem: 13327992 kB' 'KReclaimable: 240044 kB' 'Slab: 631456 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391412 kB' 'KernelStack: 12944 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.998 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.999 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
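
The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above is setup/common.sh's get_meminfo walking a meminfo listing one field at a time: each line is split on ': ', every key other than the requested one (HugePages_Rsvd here) is skipped with continue, and the matching key's value is echoed back. When a NUMA node is passed, the helper reads /sys/devices/system/node/nodeN/meminfo instead and first strips the leading "Node N " prefix (the ${mem[@]#Node +([0-9]) } expansion visible in the trace). A minimal sketch of the global-file case only, reconstructed from the IFS/read/[[ ]] calls in the trace; the function name is illustrative, not the script's own:

    # Sketch: look up a single key in /proc/meminfo the way the traced loop does.
    # Per-node lookups additionally strip the leading "Node N " prefix; omitted here.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as traced
            echo "$val"                        # numeric value; any trailing "kB" lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }

Used as, for example, resv=$(get_meminfo_sketch HugePages_Rsvd), which in this run would print 0, matching the resv=0 result a few entries further down.
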
00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:42.000 nr_hugepages=1024 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.000 resv_hugepages=0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.000 surplus_hugepages=0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.000 anon_hugepages=0 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.000 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.000 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37583388 kB' 'MemAvailable: 42269732 kB' 'Buffers: 2696 kB' 'Cached: 18385284 kB' 'SwapCached: 0 kB' 'Active: 14394048 kB' 'Inactive: 4470784 kB' 'Active(anon): 13804888 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480048 kB' 'Mapped: 223956 kB' 'Shmem: 13328036 kB' 'KReclaimable: 240044 kB' 'Slab: 631444 kB' 'SReclaimable: 240044 kB' 'SUnreclaim: 391400 kB' 'KernelStack: 12960 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14952300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198960 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2791004 kB' 'DirectMap2M: 19148800 kB' 'DirectMap1G: 47185920 kB' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
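
Between the two scans, hugepages.sh has turned the lookups into the summary echoed in the log (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the kernel's total equals the requested count plus surplus plus reserved before re-reading HugePages_Total. The node argument is empty at this point, so /sys/devices/system/node/node/meminfo does not exist and the helper falls back to /proc/meminfo, which is the large printf of global counters just above. A self-contained sketch of that consistency check, with a small awk stand-in for the meminfo lookup (helper and message names are illustrative):

    # Sketch: the accounting check behind "(( 1024 == nr_hugepages + surp + resv ))".
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }  # stand-in lookup
    nr_hugepages=1024                               # what the test configured
    resv=$(meminfo HugePages_Rsvd)                  # 0 in this run
    surp=$(meminfo HugePages_Surp)                  # 0 in this run
    total=$(meminfo HugePages_Total)                # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages )) || echo "pool size changed unexpectedly" >&2
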
00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.001 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.002 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21384768 kB' 'MemUsed: 11445116 kB' 'SwapCached: 0 kB' 'Active: 8124348 kB' 
'Inactive: 187208 kB' 'Active(anon): 7728192 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8098000 kB' 'Mapped: 118512 kB' 'AnonPages: 216712 kB' 'Shmem: 7514636 kB' 'KernelStack: 7928 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115400 kB' 'Slab: 319576 kB' 'SReclaimable: 115400 kB' 'SUnreclaim: 204176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.003 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.004 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.005 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:42.006 node0=1024 expecting 1024 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:42.006 00:02:42.006 real 0m3.114s 00:02:42.006 user 0m1.278s 00:02:42.006 sys 0m1.771s 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.006 04:03:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:42.006 ************************************ 00:02:42.006 END TEST no_shrink_alloc 00:02:42.006 ************************************ 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.006 04:03:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:42.264 04:03:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:42.264 00:02:42.264 real 0m12.292s 00:02:42.264 user 0m4.730s 00:02:42.264 sys 0m6.372s 00:02:42.264 04:03:30 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.264 04:03:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.264 ************************************ 00:02:42.264 END TEST hugepages 00:02:42.264 ************************************ 00:02:42.264 04:03:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:42.264 04:03:30 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:42.264 04:03:30 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:42.264 04:03:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:42.264 ************************************ 00:02:42.264 START TEST driver 00:02:42.264 ************************************ 00:02:42.264 04:03:30 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:42.264 * Looking for test storage... 
00:02:42.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.264 04:03:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:42.264 04:03:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.264 04:03:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.798 04:03:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:44.798 04:03:32 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:44.798 04:03:32 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:44.798 04:03:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:44.798 ************************************ 00:02:44.798 START TEST guess_driver 00:02:44.798 ************************************ 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:44.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:44.798 Looking for driver=vfio-pci 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.798 04:03:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:46.171 04:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:47.107 04:03:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:47.107 04:03:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:47.107 04:03:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:47.107 04:03:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:47.107 04:03:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:47.107 04:03:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.107 04:03:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.388 00:02:50.388 real 0m5.162s 00:02:50.388 user 0m1.253s 00:02:50.388 sys 0m2.052s 00:02:50.388 04:03:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:50.388 04:03:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:50.388 ************************************ 00:02:50.388 END TEST guess_driver 00:02:50.388 ************************************ 00:02:50.388 00:02:50.388 real 0m7.693s 00:02:50.388 user 0m1.859s 00:02:50.388 sys 0m3.139s 00:02:50.388 04:03:37 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:50.388 
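The guess_driver run traced above settles on vfio-pci because IOMMU groups are present (189 of them) and modprobe can resolve the module's dependency chain. A hedged sketch of that decision; uio_pci_generic is only an illustrative fallback, the real script weighs additional state such as the unsafe-noiommu flag:
pick_driver() {
  shopt -s nullglob
  local groups=(/sys/kernel/iommu_groups/*)
  if ((${#groups[@]} > 0)) && modprobe --show-depends vfio_pci &> /dev/null; then
    echo vfio-pci            # IOMMU active and vfio_pci resolvable, as in the trace
  else
    echo uio_pci_generic     # illustrative fallback, not taken in this run
  fi
}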
04:03:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:50.388 ************************************ 00:02:50.388 END TEST driver 00:02:50.388 ************************************ 00:02:50.388 04:03:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:50.388 04:03:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:50.388 04:03:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:50.388 04:03:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.388 ************************************ 00:02:50.388 START TEST devices 00:02:50.388 ************************************ 00:02:50.388 04:03:37 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:50.388 * Looking for test storage... 00:02:50.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.388 04:03:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:50.388 04:03:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:50.388 04:03:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.388 04:03:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:51.760 04:03:39 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:51.760 No valid GPT data, 
bailing 00:02:51.760 04:03:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:51.760 04:03:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:51.760 04:03:39 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:51.760 04:03:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:51.760 ************************************ 00:02:51.760 START TEST nvme_mount 00:02:51.760 ************************************ 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:51.760 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:51.761 04:03:39 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:51.761 04:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:52.693 Creating new GPT entries in memory. 00:02:52.693 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:52.693 other utilities. 00:02:52.693 04:03:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:52.693 04:03:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:52.693 04:03:40 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:52.693 04:03:40 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:52.693 04:03:40 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:53.625 Creating new GPT entries in memory. 00:02:53.625 The operation has completed successfully. 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3232626 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
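The partition step traced here zaps the GPT and creates one roughly 1 GiB partition while holding flock on the whole disk, with sync_dev_uevents.sh waiting for the resulting uevent. A reduced sketch of the same commands; partprobe stands in for the uevent sync and the device name is the one from this run:
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                           # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # sectors 2048-2099199, about 1 GiB
partprobe "$disk"                                  # re-read the partition table
mkfs.ext4 -qF "${disk}p1"                          # quiet, forced format, as in the trace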
00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.625 04:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.995 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:54.996 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:54.996 04:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:55.253 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:55.253 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:55.253 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:55.253 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:55.253 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:55.253 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:55.253 04:03:43 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.253 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:55.253 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:55.253 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.254 04:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:56.646 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:56.647 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.647 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:56.647 04:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:56.647 04:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.647 04:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.017 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.018 04:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:58.276 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:58.276 00:02:58.276 real 0m6.713s 00:02:58.276 user 0m1.684s 00:02:58.276 sys 0m2.639s 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:58.276 04:03:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:02:58.276 ************************************ 00:02:58.276 END TEST nvme_mount 00:02:58.276 ************************************ 
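The nvme_mount test that just ended is a mount round-trip on the first partition. A condensed sketch of it, with $SPDK_DIR standing in for the long workspace path used in the trace:
mnt=$SPDK_DIR/test/setup/nvme_mount
mkdir -p "$mnt"
mount /dev/nvme0n1p1 "$mnt"        # filesystem was created by the mkfs.ext4 -qF step above
touch "$mnt/test_nvme"             # marker file that the verify() helper checks for
umount "$mnt"
wipefs --all /dev/nvme0n1p1        # erase the leftover ext4 signature, as in the cleanup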
00:02:58.276 04:03:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:58.276 04:03:46 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:58.276 04:03:46 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:58.276 04:03:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:58.276 ************************************ 00:02:58.276 START TEST dm_mount 00:02:58.276 ************************************ 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:58.277 04:03:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:59.651 Creating new GPT entries in memory. 00:02:59.651 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:59.651 other utilities. 00:02:59.651 04:03:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:59.651 04:03:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:59.651 04:03:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:59.651 04:03:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:59.651 04:03:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:00.283 Creating new GPT entries in memory. 00:03:00.283 The operation has completed successfully. 
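Around this point dm_mount lays out two adjacent 1 GiB partitions (the second sgdisk call follows just below) and then builds a device-mapper node over them. The exact table SPDK feeds to dmsetup is not visible in this trace, so the linear mapping in this sketch is only an assumption for illustration:
sgdisk /dev/nvme0n1 --new=1:2048:2099199       # partition 1, as traced above
sgdisk /dev/nvme0n1 --new=2:2099200:4196351    # partition 2, directly behind it
dmsetup create nvme_dm_test --table "0 2097152 linear /dev/nvme0n1p1 0"   # assumed table
readlink -f /dev/mapper/nvme_dm_test           # resolves to /dev/dm-0 later in the trace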
00:03:00.283 04:03:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:00.283 04:03:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:00.283 04:03:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:00.557 04:03:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:00.557 04:03:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:01.492 The operation has completed successfully. 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3235308 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:01.492 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.493 04:03:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:02.866 04:03:50 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.866 04:03:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.240 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:04.241 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:04.241 00:03:04.241 real 0m6.005s 00:03:04.241 user 0m1.149s 00:03:04.241 sys 0m1.744s 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.241 04:03:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:04.241 ************************************ 00:03:04.241 END TEST dm_mount 00:03:04.241 ************************************ 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.241 04:03:52 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.241 04:03:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:04.498 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:04.498 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:04.498 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:04.498 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:04.498 04:03:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.755 04:03:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:04.755 00:03:04.755 real 0m14.718s 00:03:04.755 user 0m3.502s 00:03:04.755 sys 0m5.483s 00:03:04.755 04:03:52 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.755 04:03:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:04.755 ************************************ 00:03:04.755 END TEST devices 00:03:04.755 ************************************ 00:03:04.755 00:03:04.755 real 0m46.151s 00:03:04.755 user 0m13.683s 00:03:04.755 sys 0m21.018s 00:03:04.755 04:03:52 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.755 04:03:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.755 ************************************ 00:03:04.755 END TEST setup.sh 00:03:04.755 ************************************ 00:03:04.755 04:03:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.126 Hugepages 00:03:06.126 node hugesize free / total 00:03:06.126 node0 1048576kB 0 / 0 00:03:06.126 node0 2048kB 2048 / 2048 00:03:06.126 node1 1048576kB 0 / 0 00:03:06.126 node1 2048kB 0 / 0 00:03:06.126 00:03:06.126 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.126 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:06.126 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:06.126 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:06.126 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:06.126 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:06.126 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:06.127 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:06.127 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:06.127 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:06.127 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:06.127 04:03:54 -- spdk/autotest.sh@130 -- # uname -s 00:03:06.127 04:03:54 -- 
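
The teardown traced above unmounts the dm_mount test directory, removes the nvme_dm_test device-mapper target with dmsetup, and wipes the partition and GPT signatures with wipefs before setup.sh finishes. A minimal standalone sketch of that cleanup sequence, assuming the device names from this run (nvme_dm_test over /dev/nvme0n1p1 and /dev/nvme0n1p2), a hypothetical mount point, and root privileges; it illustrates the pattern, not the helper the suite actually calls:

    #!/usr/bin/env bash
    # Sketch of the dm_mount-style cleanup seen in the trace (assumed names).
    set -euo pipefail

    mount_point=/tmp/dm_mount      # hypothetical mount point, for illustration only
    dm_name=nvme_dm_test           # device-mapper target created by the test
    parts=(/dev/nvme0n1p1 /dev/nvme0n1p2)

    # Unmount only if something is actually mounted there.
    if mountpoint -q "$mount_point"; then
        umount "$mount_point"
    fi

    # Tear down the dm target if the mapper node still exists.
    if [[ -L /dev/mapper/$dm_name ]]; then
        dmsetup remove --force "$dm_name"
    fi

    # Erase filesystem/partition signatures on the backing partitions.
    for part in "${parts[@]}"; do
        if [[ -b $part ]]; then
            wipefs --all "$part"
        fi
    done
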
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:06.127 04:03:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:06.127 04:03:54 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.497 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.497 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.497 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.429 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.686 04:03:56 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:09.618 04:03:57 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:09.618 04:03:57 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:09.618 04:03:57 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:09.618 04:03:57 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:09.618 04:03:57 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:09.618 04:03:57 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:09.618 04:03:57 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:09.618 04:03:57 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:09.618 04:03:57 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:09.618 04:03:57 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:09.618 04:03:57 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:09.618 04:03:57 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.993 Waiting for block devices as requested 00:03:10.993 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:10.993 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.250 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.250 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.250 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.250 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:11.509 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:11.509 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:11.509 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:11.509 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.767 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.767 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.767 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.767 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:12.025 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.025 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.025 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:12.283 04:04:00 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
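
In the entries above, get_nvme_bdfs builds the controller list by piping scripts/gen_nvme.sh through jq, and scripts/setup.sh performs the ioatdma/nvme <-> vfio-pci rebinds. For spot checks on the test node, roughly the same information can be read straight from sysfs; the sketch below lists each NVMe controller's PCI address and currently bound driver, assuming a standard sysfs layout, and only sees controllers bound to the kernel nvme driver (i.e. after "setup.sh reset", not while they sit on vfio-pci):

    #!/usr/bin/env bash
    # List NVMe controller BDFs and their bound kernel driver via sysfs.
    set -euo pipefail

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(cat "$ctrl/address")                 # e.g. 0000:88:00.0 in this run
        pcidev=$(readlink -f "$ctrl/device")       # PCI device directory
        if [[ -L $pcidev/driver ]]; then
            drv=$(basename "$(readlink -f "$pcidev/driver")")
        else
            drv=none
        fi
        printf '%s %s driver=%s\n' "$(basename "$ctrl")" "$bdf" "$drv"
    done
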
00:03:12.283 04:04:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:03:12.283 04:04:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:12.283 04:04:00 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:12.283 04:04:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:12.283 04:04:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:12.283 04:04:00 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:12.283 04:04:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:12.283 04:04:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:12.283 04:04:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:12.283 04:04:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:12.283 04:04:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:12.283 04:04:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:12.283 04:04:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:12.283 04:04:00 -- common/autotest_common.sh@1553 -- # continue 00:03:12.283 04:04:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.283 04:04:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.283 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:12.283 04:04:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.283 04:04:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:12.283 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:12.283 04:04:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.654 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.654 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.654 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.587 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.587 04:04:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:14.587 04:04:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.587 04:04:02 -- 
common/autotest_common.sh@10 -- # set +x 00:03:14.587 04:04:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:14.587 04:04:02 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:14.587 04:04:02 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.587 04:04:02 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:14.587 04:04:02 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:14.587 04:04:02 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:14.587 04:04:02 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:14.587 04:04:02 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:14.587 04:04:02 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.587 04:04:02 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.587 04:04:02 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:14.845 04:04:02 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:14.845 04:04:02 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:14.845 04:04:02 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:14.845 04:04:02 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:14.845 04:04:02 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:14.845 04:04:02 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:14.845 04:04:02 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:14.845 04:04:02 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:03:14.845 04:04:02 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:03:14.845 04:04:02 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3241195 00:03:14.845 04:04:02 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.845 04:04:02 -- common/autotest_common.sh@1594 -- # waitforlisten 3241195 00:03:14.845 04:04:02 -- common/autotest_common.sh@827 -- # '[' -z 3241195 ']' 00:03:14.845 04:04:02 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:14.845 04:04:02 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:14.845 04:04:02 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:14.845 04:04:02 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:14.845 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:14.845 [2024-05-15 04:04:02.687645] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
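
The checks traced above gate the namespace-revert and OPAL-revert paths: nvme_namespace_revert reads the OACS word and the unallocated capacity from Identify Controller, and opal_revert_cleanup keeps only controllers whose PCI device id is 0x0a54. A hedged sketch of those probes using nvme-cli and sysfs; the BDF, the /dev/nvme0 node and the 0x0a54 id are the values from this run rather than universal constants, and the commands need root plus the nvme-cli package:

    #!/usr/bin/env bash
    # Probe OACS namespace-management support, unallocated capacity, and the
    # PCI device id for one controller, as the autotest helpers above do.
    set -euo pipefail

    bdf=0000:88:00.0      # controller under test in this run
    ctrl=/dev/nvme0       # its character device while bound to the nvme driver

    # PCI device id straight from sysfs (0x0a54 for this controller).
    cat "/sys/bus/pci/devices/$bdf/device"

    # OACS field from Identify Controller; bit 3 = namespace management.
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        echo "namespace management supported (oacs=$oacs)"
    fi

    # Unallocated NVM capacity; 0 means there is nothing to reclaim.
    nvme id-ctrl "$ctrl" | grep unvmcap
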
00:03:14.845 [2024-05-15 04:04:02.687742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241195 ] 00:03:14.845 EAL: No free 2048 kB hugepages reported on node 1 00:03:14.845 [2024-05-15 04:04:02.756584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.102 [2024-05-15 04:04:02.867566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.359 04:04:03 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:15.359 04:04:03 -- common/autotest_common.sh@860 -- # return 0 00:03:15.359 04:04:03 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:15.359 04:04:03 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:15.359 04:04:03 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:18.704 nvme0n1 00:03:18.704 04:04:06 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:18.704 [2024-05-15 04:04:06.454798] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:18.704 [2024-05-15 04:04:06.454840] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:18.704 request: 00:03:18.704 { 00:03:18.704 "nvme_ctrlr_name": "nvme0", 00:03:18.704 "password": "test", 00:03:18.704 "method": "bdev_nvme_opal_revert", 00:03:18.704 "req_id": 1 00:03:18.704 } 00:03:18.704 Got JSON-RPC error response 00:03:18.704 response: 00:03:18.704 { 00:03:18.704 "code": -32603, 00:03:18.704 "message": "Internal error" 00:03:18.704 } 00:03:18.704 04:04:06 -- common/autotest_common.sh@1600 -- # true 00:03:18.704 04:04:06 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:18.704 04:04:06 -- common/autotest_common.sh@1604 -- # killprocess 3241195 00:03:18.704 04:04:06 -- common/autotest_common.sh@946 -- # '[' -z 3241195 ']' 00:03:18.704 04:04:06 -- common/autotest_common.sh@950 -- # kill -0 3241195 00:03:18.704 04:04:06 -- common/autotest_common.sh@951 -- # uname 00:03:18.704 04:04:06 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:18.704 04:04:06 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3241195 00:03:18.704 04:04:06 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:18.704 04:04:06 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:18.704 04:04:06 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3241195' 00:03:18.704 killing process with pid 3241195 00:03:18.704 04:04:06 -- common/autotest_common.sh@965 -- # kill 3241195 00:03:18.704 04:04:06 -- common/autotest_common.sh@970 -- # wait 3241195 00:03:20.601 04:04:08 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:20.601 04:04:08 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:20.601 04:04:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.601 04:04:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.601 04:04:08 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:20.601 04:04:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:20.601 04:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:20.601 04:04:08 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.601 04:04:08 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:20.601 04:04:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.601 04:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:20.601 ************************************ 00:03:20.601 START TEST env 00:03:20.601 ************************************ 00:03:20.601 04:04:08 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.601 * Looking for test storage... 00:03:20.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:20.601 04:04:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.601 04:04:08 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:20.601 04:04:08 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.601 04:04:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.601 ************************************ 00:03:20.601 START TEST env_memory 00:03:20.601 ************************************ 00:03:20.601 04:04:08 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.601 00:03:20.601 00:03:20.601 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.601 http://cunit.sourceforge.net/ 00:03:20.601 00:03:20.601 00:03:20.601 Suite: memory 00:03:20.601 Test: alloc and free memory map ...[2024-05-15 04:04:08.434495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:20.601 passed 00:03:20.601 Test: mem map translation ...[2024-05-15 04:04:08.455041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:20.601 [2024-05-15 04:04:08.455063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:20.601 [2024-05-15 04:04:08.455118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:20.601 [2024-05-15 04:04:08.455130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:20.601 passed 00:03:20.601 Test: mem map registration ...[2024-05-15 04:04:08.495771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:20.601 [2024-05-15 04:04:08.495791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:20.601 passed 00:03:20.601 Test: mem map adjacent registrations ...passed 00:03:20.601 00:03:20.601 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.601 suites 1 1 n/a 0 0 00:03:20.601 tests 4 4 4 0 0 00:03:20.601 asserts 152 152 152 0 n/a 00:03:20.601 00:03:20.601 Elapsed time = 0.144 seconds 00:03:20.601 00:03:20.601 real 0m0.151s 00:03:20.601 user 0m0.145s 00:03:20.601 sys 0m0.006s 00:03:20.601 04:04:08 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:20.601 04:04:08 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:20.601 ************************************ 00:03:20.601 END TEST env_memory 00:03:20.601 ************************************ 00:03:20.601 04:04:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.601 04:04:08 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:20.601 04:04:08 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.601 04:04:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.601 ************************************ 00:03:20.601 START TEST env_vtophys 00:03:20.601 ************************************ 00:03:20.601 04:04:08 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.601 EAL: lib.eal log level changed from notice to debug 00:03:20.601 EAL: Detected lcore 0 as core 0 on socket 0 00:03:20.601 EAL: Detected lcore 1 as core 1 on socket 0 00:03:20.601 EAL: Detected lcore 2 as core 2 on socket 0 00:03:20.601 EAL: Detected lcore 3 as core 3 on socket 0 00:03:20.601 EAL: Detected lcore 4 as core 4 on socket 0 00:03:20.601 EAL: Detected lcore 5 as core 5 on socket 0 00:03:20.601 EAL: Detected lcore 6 as core 8 on socket 0 00:03:20.601 EAL: Detected lcore 7 as core 9 on socket 0 00:03:20.601 EAL: Detected lcore 8 as core 10 on socket 0 00:03:20.601 EAL: Detected lcore 9 as core 11 on socket 0 00:03:20.601 EAL: Detected lcore 10 as core 12 on socket 0 00:03:20.601 EAL: Detected lcore 11 as core 13 on socket 0 00:03:20.601 EAL: Detected lcore 12 as core 0 on socket 1 00:03:20.601 EAL: Detected lcore 13 as core 1 on socket 1 00:03:20.601 EAL: Detected lcore 14 as core 2 on socket 1 00:03:20.601 EAL: Detected lcore 15 as core 3 on socket 1 00:03:20.601 EAL: Detected lcore 16 as core 4 on socket 1 00:03:20.601 EAL: Detected lcore 17 as core 5 on socket 1 00:03:20.601 EAL: Detected lcore 18 as core 8 on socket 1 00:03:20.601 EAL: Detected lcore 19 as core 9 on socket 1 00:03:20.601 EAL: Detected lcore 20 as core 10 on socket 1 00:03:20.601 EAL: Detected lcore 21 as core 11 on socket 1 00:03:20.601 EAL: Detected lcore 22 as core 12 on socket 1 00:03:20.601 EAL: Detected lcore 23 as core 13 on socket 1 00:03:20.601 EAL: Detected lcore 24 as core 0 on socket 0 00:03:20.601 EAL: Detected lcore 25 as core 1 on socket 0 00:03:20.601 EAL: Detected lcore 26 as core 2 on socket 0 00:03:20.601 EAL: Detected lcore 27 as core 3 on socket 0 00:03:20.601 EAL: Detected lcore 28 as core 4 on socket 0 00:03:20.601 EAL: Detected lcore 29 as core 5 on socket 0 00:03:20.601 EAL: Detected lcore 30 as core 8 on socket 0 00:03:20.601 EAL: Detected lcore 31 as core 9 on socket 0 00:03:20.601 EAL: Detected lcore 32 as core 10 on socket 0 00:03:20.601 EAL: Detected lcore 33 as core 11 on socket 0 00:03:20.601 EAL: Detected lcore 34 as core 12 on socket 0 00:03:20.601 EAL: Detected lcore 35 as core 13 on socket 0 00:03:20.601 EAL: Detected lcore 36 as core 0 on socket 1 00:03:20.601 EAL: Detected lcore 37 as core 1 on socket 1 00:03:20.601 EAL: Detected lcore 38 as core 2 on socket 1 00:03:20.601 EAL: Detected lcore 39 as core 3 on socket 1 00:03:20.601 EAL: Detected lcore 40 as core 4 on socket 1 00:03:20.601 EAL: Detected lcore 41 as core 5 on socket 1 00:03:20.601 EAL: Detected lcore 42 as core 8 on socket 1 00:03:20.601 EAL: Detected lcore 43 as core 9 on socket 1 00:03:20.601 EAL: Detected lcore 44 as core 10 on socket 1 00:03:20.601 EAL: 
Detected lcore 45 as core 11 on socket 1 00:03:20.601 EAL: Detected lcore 46 as core 12 on socket 1 00:03:20.601 EAL: Detected lcore 47 as core 13 on socket 1 00:03:20.860 EAL: Maximum logical cores by configuration: 128 00:03:20.860 EAL: Detected CPU lcores: 48 00:03:20.860 EAL: Detected NUMA nodes: 2 00:03:20.860 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:20.860 EAL: Detected shared linkage of DPDK 00:03:20.860 EAL: No shared files mode enabled, IPC will be disabled 00:03:20.860 EAL: Bus pci wants IOVA as 'DC' 00:03:20.860 EAL: Buses did not request a specific IOVA mode. 00:03:20.860 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:20.860 EAL: Selected IOVA mode 'VA' 00:03:20.860 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.860 EAL: Probing VFIO support... 00:03:20.860 EAL: IOMMU type 1 (Type 1) is supported 00:03:20.860 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:20.860 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:20.860 EAL: VFIO support initialized 00:03:20.860 EAL: Ask a virtual area of 0x2e000 bytes 00:03:20.860 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:20.860 EAL: Setting up physically contiguous memory... 00:03:20.860 EAL: Setting maximum number of open files to 524288 00:03:20.860 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:20.860 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:20.860 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:20.860 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:20.860 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:20.860 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.860 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:20.860 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.860 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.860 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:20.861 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:20.861 EAL: Hugepages will be freed exactly as allocated. 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: TSC frequency is ~2700000 KHz 00:03:20.861 EAL: Main lcore 0 is ready (tid=7fe9548cda00;cpuset=[0]) 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 0 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 2MB 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:20.861 EAL: Mem event callback 'spdk:(nil)' registered 00:03:20.861 00:03:20.861 00:03:20.861 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.861 http://cunit.sourceforge.net/ 00:03:20.861 00:03:20.861 00:03:20.861 Suite: components_suite 00:03:20.861 Test: vtophys_malloc_test ...passed 00:03:20.861 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 4MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 4MB 00:03:20.861 EAL: Trying to obtain current memory policy. 
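
The expand/shrink steps in this components_suite run are backed by the 2048 kB hugepages set up earlier, so the allocations are visible from the host side as well. A quick, SPDK-agnostic way to watch hugepage consumption per NUMA node while such a test runs, using only procfs and sysfs reads:

    #!/usr/bin/env bash
    # Snapshot system-wide and per-node 2 MB hugepage usage.
    grep -i huge /proc/meminfo

    for node in /sys/devices/system/node/node*; do
        hp=$node/hugepages/hugepages-2048kB
        [[ -d $hp ]] || continue
        printf '%s: total=%s free=%s\n' "$(basename "$node")" \
            "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
    done
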
00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 6MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 6MB 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 10MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 10MB 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 18MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 18MB 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 34MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 34MB 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 66MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 66MB 00:03:20.861 EAL: Trying to obtain current memory policy. 
00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.861 EAL: Restoring previous memory policy: 4 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was expanded by 130MB 00:03:20.861 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.861 EAL: request: mp_malloc_sync 00:03:20.861 EAL: No shared files mode enabled, IPC is disabled 00:03:20.861 EAL: Heap on socket 0 was shrunk by 130MB 00:03:20.861 EAL: Trying to obtain current memory policy. 00:03:20.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.119 EAL: Restoring previous memory policy: 4 00:03:21.119 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.119 EAL: request: mp_malloc_sync 00:03:21.119 EAL: No shared files mode enabled, IPC is disabled 00:03:21.119 EAL: Heap on socket 0 was expanded by 258MB 00:03:21.119 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.119 EAL: request: mp_malloc_sync 00:03:21.119 EAL: No shared files mode enabled, IPC is disabled 00:03:21.119 EAL: Heap on socket 0 was shrunk by 258MB 00:03:21.119 EAL: Trying to obtain current memory policy. 00:03:21.119 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.378 EAL: Restoring previous memory policy: 4 00:03:21.378 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.378 EAL: request: mp_malloc_sync 00:03:21.378 EAL: No shared files mode enabled, IPC is disabled 00:03:21.378 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.378 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.378 EAL: request: mp_malloc_sync 00:03:21.378 EAL: No shared files mode enabled, IPC is disabled 00:03:21.378 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.378 EAL: Trying to obtain current memory policy. 
00:03:21.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.635 EAL: Restoring previous memory policy: 4 00:03:21.635 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.635 EAL: request: mp_malloc_sync 00:03:21.635 EAL: No shared files mode enabled, IPC is disabled 00:03:21.635 EAL: Heap on socket 0 was expanded by 1026MB 00:03:21.892 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.149 EAL: request: mp_malloc_sync 00:03:22.149 EAL: No shared files mode enabled, IPC is disabled 00:03:22.149 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:22.149 passed 00:03:22.149 00:03:22.149 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.149 suites 1 1 n/a 0 0 00:03:22.149 tests 2 2 2 0 0 00:03:22.149 asserts 497 497 497 0 n/a 00:03:22.149 00:03:22.149 Elapsed time = 1.353 seconds 00:03:22.149 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.149 EAL: request: mp_malloc_sync 00:03:22.149 EAL: No shared files mode enabled, IPC is disabled 00:03:22.149 EAL: Heap on socket 0 was shrunk by 2MB 00:03:22.149 EAL: No shared files mode enabled, IPC is disabled 00:03:22.149 EAL: No shared files mode enabled, IPC is disabled 00:03:22.149 EAL: No shared files mode enabled, IPC is disabled 00:03:22.149 00:03:22.149 real 0m1.477s 00:03:22.149 user 0m0.839s 00:03:22.149 sys 0m0.607s 00:03:22.149 04:04:10 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.149 04:04:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:22.149 ************************************ 00:03:22.149 END TEST env_vtophys 00:03:22.149 ************************************ 00:03:22.149 04:04:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.149 04:04:10 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.149 04:04:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.149 04:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.149 ************************************ 00:03:22.149 START TEST env_pci 00:03:22.149 ************************************ 00:03:22.149 04:04:10 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.149 00:03:22.149 00:03:22.149 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.149 http://cunit.sourceforge.net/ 00:03:22.149 00:03:22.149 00:03:22.149 Suite: pci 00:03:22.149 Test: pci_hook ...[2024-05-15 04:04:10.136395] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3242089 has claimed it 00:03:22.408 EAL: Cannot find device (10000:00:01.0) 00:03:22.408 EAL: Failed to attach device on primary process 00:03:22.409 passed 00:03:22.409 00:03:22.409 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.409 suites 1 1 n/a 0 0 00:03:22.409 tests 1 1 1 0 0 00:03:22.409 asserts 25 25 25 0 n/a 00:03:22.409 00:03:22.409 Elapsed time = 0.027 seconds 00:03:22.409 00:03:22.409 real 0m0.039s 00:03:22.409 user 0m0.014s 00:03:22.409 sys 0m0.025s 00:03:22.409 04:04:10 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.409 04:04:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.409 ************************************ 00:03:22.409 END TEST env_pci 00:03:22.409 ************************************ 00:03:22.409 04:04:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.409 
04:04:10 env -- env/env.sh@15 -- # uname 00:03:22.409 04:04:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.409 04:04:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.409 04:04:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.409 04:04:10 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:22.409 04:04:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.409 04:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.409 ************************************ 00:03:22.409 START TEST env_dpdk_post_init 00:03:22.409 ************************************ 00:03:22.409 04:04:10 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.409 EAL: Detected CPU lcores: 48 00:03:22.409 EAL: Detected NUMA nodes: 2 00:03:22.409 EAL: Detected shared linkage of DPDK 00:03:22.409 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.409 EAL: Selected IOVA mode 'VA' 00:03:22.409 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.409 EAL: VFIO support initialized 00:03:22.409 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.409 EAL: Using IOMMU type 1 (Type 1) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:22.409 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:22.669 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:23.606 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:26.886 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:26.886 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:26.886 Starting DPDK initialization... 00:03:26.886 Starting SPDK post initialization... 00:03:26.886 SPDK NVMe probe 00:03:26.886 Attaching to 0000:88:00.0 00:03:26.886 Attached to 0000:88:00.0 00:03:26.886 Cleaning up... 
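
env.sh above assembles the EAL argument string ('-c 0x1', plus --base-virtaddr=0x200000000000 on Linux) and hands it to each test binary; the binaries are plain EAL applications, so a single case can be re-run by hand with the same options when debugging. A sketch, assuming the repository path shown in this run, that the binaries are already built, and that hugepages and device bindings are still configured as they were during the job:

    #!/usr/bin/env bash
    # Re-run one env test binary with the same EAL options env.sh used above.
    set -euo pipefail

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
    eal_args=(-c 0x1 --base-virtaddr=0x200000000000)

    sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" "${eal_args[@]}"
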
00:03:26.886 00:03:26.886 real 0m4.422s 00:03:26.886 user 0m3.255s 00:03:26.886 sys 0m0.226s 00:03:26.886 04:04:14 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.886 04:04:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.886 ************************************ 00:03:26.886 END TEST env_dpdk_post_init 00:03:26.886 ************************************ 00:03:26.886 04:04:14 env -- env/env.sh@26 -- # uname 00:03:26.886 04:04:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.886 04:04:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.886 04:04:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.886 04:04:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.886 04:04:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.886 ************************************ 00:03:26.886 START TEST env_mem_callbacks 00:03:26.886 ************************************ 00:03:26.886 04:04:14 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.886 EAL: Detected CPU lcores: 48 00:03:26.886 EAL: Detected NUMA nodes: 2 00:03:26.886 EAL: Detected shared linkage of DPDK 00:03:26.886 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.886 EAL: Selected IOVA mode 'VA' 00:03:26.886 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.886 EAL: VFIO support initialized 00:03:26.886 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.886 00:03:26.886 00:03:26.886 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.886 http://cunit.sourceforge.net/ 00:03:26.886 00:03:26.886 00:03:26.886 Suite: memory 00:03:26.886 Test: test ... 
00:03:26.886 register 0x200000200000 2097152 00:03:26.886 malloc 3145728 00:03:26.886 register 0x200000400000 4194304 00:03:26.886 buf 0x200000500000 len 3145728 PASSED 00:03:26.886 malloc 64 00:03:26.886 buf 0x2000004fff40 len 64 PASSED 00:03:26.886 malloc 4194304 00:03:26.886 register 0x200000800000 6291456 00:03:26.886 buf 0x200000a00000 len 4194304 PASSED 00:03:26.886 free 0x200000500000 3145728 00:03:26.886 free 0x2000004fff40 64 00:03:26.886 unregister 0x200000400000 4194304 PASSED 00:03:26.886 free 0x200000a00000 4194304 00:03:26.886 unregister 0x200000800000 6291456 PASSED 00:03:26.886 malloc 8388608 00:03:26.886 register 0x200000400000 10485760 00:03:26.886 buf 0x200000600000 len 8388608 PASSED 00:03:26.886 free 0x200000600000 8388608 00:03:26.886 unregister 0x200000400000 10485760 PASSED 00:03:26.886 passed 00:03:26.886 00:03:26.886 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.886 suites 1 1 n/a 0 0 00:03:26.886 tests 1 1 1 0 0 00:03:26.886 asserts 15 15 15 0 n/a 00:03:26.886 00:03:26.886 Elapsed time = 0.005 seconds 00:03:26.886 00:03:26.886 real 0m0.054s 00:03:26.886 user 0m0.017s 00:03:26.886 sys 0m0.036s 00:03:26.886 04:04:14 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.886 04:04:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:26.886 ************************************ 00:03:26.886 END TEST env_mem_callbacks 00:03:26.886 ************************************ 00:03:26.886 00:03:26.886 real 0m6.443s 00:03:26.886 user 0m4.390s 00:03:26.886 sys 0m1.087s 00:03:26.886 04:04:14 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.886 04:04:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.886 ************************************ 00:03:26.886 END TEST env 00:03:26.886 ************************************ 00:03:26.886 04:04:14 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.887 04:04:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.887 04:04:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.887 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:26.887 ************************************ 00:03:26.887 START TEST rpc 00:03:26.887 ************************************ 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.887 * Looking for test storage... 00:03:26.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.887 04:04:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3242835 00:03:26.887 04:04:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:26.887 04:04:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.887 04:04:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3242835 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@827 -- # '[' -z 3242835 ']' 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:26.887 04:04:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.145 [2024-05-15 04:04:14.915675] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:03:27.145 [2024-05-15 04:04:14.915762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242835 ] 00:03:27.145 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.145 [2024-05-15 04:04:14.981364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.145 [2024-05-15 04:04:15.087503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:27.145 [2024-05-15 04:04:15.087565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3242835' to capture a snapshot of events at runtime. 00:03:27.145 [2024-05-15 04:04:15.087593] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:27.146 [2024-05-15 04:04:15.087605] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:27.146 [2024-05-15 04:04:15.087614] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3242835 for offline analysis/debug. 00:03:27.146 [2024-05-15 04:04:15.087651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.404 04:04:15 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:27.404 04:04:15 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:27.404 04:04:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.404 04:04:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.404 04:04:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:27.404 04:04:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:27.404 04:04:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:27.404 04:04:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.404 04:04:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.404 ************************************ 00:03:27.404 START TEST rpc_integrity 00:03:27.404 ************************************ 00:03:27.404 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:27.404 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.404 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.404 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.404 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.404 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:27.404 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.662 04:04:15 rpc.rpc_integrity -- 
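
rpc.sh above starts spdk_tgt with '-e bdev' and then sits in waitforlisten until the JSON-RPC socket answers. A minimal version of that start-and-wait pattern, polling the default UNIX socket with rpc.py instead of sleeping a fixed time; the paths are the ones from this run and the retry count is arbitrary:

    #!/usr/bin/env bash
    # Start spdk_tgt and wait until its JSON-RPC socket accepts requests.
    set -euo pipefail

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
    sock=/var/tmp/spdk.sock

    "$rootdir/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!

    # Poll the RPC socket rather than guessing how long startup takes.
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt (pid $tgt_pid) is listening on $sock"
            break
        fi
        sleep 0.1
    done
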
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.662 { 00:03:27.662 "name": "Malloc0", 00:03:27.662 "aliases": [ 00:03:27.662 "3f0d1ccf-316c-4faf-bc6b-a64a2c52b961" 00:03:27.662 ], 00:03:27.662 "product_name": "Malloc disk", 00:03:27.662 "block_size": 512, 00:03:27.662 "num_blocks": 16384, 00:03:27.662 "uuid": "3f0d1ccf-316c-4faf-bc6b-a64a2c52b961", 00:03:27.662 "assigned_rate_limits": { 00:03:27.662 "rw_ios_per_sec": 0, 00:03:27.662 "rw_mbytes_per_sec": 0, 00:03:27.662 "r_mbytes_per_sec": 0, 00:03:27.662 "w_mbytes_per_sec": 0 00:03:27.662 }, 00:03:27.662 "claimed": false, 00:03:27.662 "zoned": false, 00:03:27.662 "supported_io_types": { 00:03:27.662 "read": true, 00:03:27.662 "write": true, 00:03:27.662 "unmap": true, 00:03:27.662 "write_zeroes": true, 00:03:27.662 "flush": true, 00:03:27.662 "reset": true, 00:03:27.662 "compare": false, 00:03:27.662 "compare_and_write": false, 00:03:27.662 "abort": true, 00:03:27.662 "nvme_admin": false, 00:03:27.662 "nvme_io": false 00:03:27.662 }, 00:03:27.662 "memory_domains": [ 00:03:27.662 { 00:03:27.662 "dma_device_id": "system", 00:03:27.662 "dma_device_type": 1 00:03:27.662 }, 00:03:27.662 { 00:03:27.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.662 "dma_device_type": 2 00:03:27.662 } 00:03:27.662 ], 00:03:27.662 "driver_specific": {} 00:03:27.662 } 00:03:27.662 ]' 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.662 [2024-05-15 04:04:15.496206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:27.662 [2024-05-15 04:04:15.496264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:27.662 [2024-05-15 04:04:15.496287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1735c10 00:03:27.662 [2024-05-15 04:04:15.496302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:27.662 [2024-05-15 04:04:15.497756] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:27.662 [2024-05-15 04:04:15.497784] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:27.662 Passthru0 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:27.662 { 00:03:27.662 "name": "Malloc0", 00:03:27.662 "aliases": [ 00:03:27.662 "3f0d1ccf-316c-4faf-bc6b-a64a2c52b961" 00:03:27.662 ], 00:03:27.662 "product_name": "Malloc disk", 00:03:27.662 "block_size": 512, 00:03:27.662 "num_blocks": 16384, 00:03:27.662 "uuid": "3f0d1ccf-316c-4faf-bc6b-a64a2c52b961", 00:03:27.662 "assigned_rate_limits": { 00:03:27.662 "rw_ios_per_sec": 0, 00:03:27.662 "rw_mbytes_per_sec": 0, 00:03:27.662 "r_mbytes_per_sec": 0, 00:03:27.662 "w_mbytes_per_sec": 0 00:03:27.662 }, 00:03:27.662 "claimed": true, 00:03:27.662 "claim_type": "exclusive_write", 00:03:27.662 "zoned": false, 00:03:27.662 "supported_io_types": { 00:03:27.662 "read": true, 00:03:27.662 "write": true, 00:03:27.662 "unmap": true, 00:03:27.662 "write_zeroes": true, 00:03:27.662 "flush": true, 00:03:27.662 "reset": true, 00:03:27.662 "compare": false, 00:03:27.662 "compare_and_write": false, 00:03:27.662 "abort": true, 00:03:27.662 "nvme_admin": false, 00:03:27.662 "nvme_io": false 00:03:27.662 }, 00:03:27.662 "memory_domains": [ 00:03:27.662 { 00:03:27.662 "dma_device_id": "system", 00:03:27.662 "dma_device_type": 1 00:03:27.662 }, 00:03:27.662 { 00:03:27.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.662 "dma_device_type": 2 00:03:27.662 } 00:03:27.662 ], 00:03:27.662 "driver_specific": {} 00:03:27.662 }, 00:03:27.662 { 00:03:27.662 "name": "Passthru0", 00:03:27.662 "aliases": [ 00:03:27.662 "f595e752-25da-522f-9a85-1c6fb4b1aa65" 00:03:27.662 ], 00:03:27.662 "product_name": "passthru", 00:03:27.662 "block_size": 512, 00:03:27.662 "num_blocks": 16384, 00:03:27.662 "uuid": "f595e752-25da-522f-9a85-1c6fb4b1aa65", 00:03:27.662 "assigned_rate_limits": { 00:03:27.662 "rw_ios_per_sec": 0, 00:03:27.662 "rw_mbytes_per_sec": 0, 00:03:27.662 "r_mbytes_per_sec": 0, 00:03:27.662 "w_mbytes_per_sec": 0 00:03:27.662 }, 00:03:27.662 "claimed": false, 00:03:27.662 "zoned": false, 00:03:27.662 "supported_io_types": { 00:03:27.662 "read": true, 00:03:27.662 "write": true, 00:03:27.662 "unmap": true, 00:03:27.662 "write_zeroes": true, 00:03:27.662 "flush": true, 00:03:27.662 "reset": true, 00:03:27.662 "compare": false, 00:03:27.662 "compare_and_write": false, 00:03:27.662 "abort": true, 00:03:27.662 "nvme_admin": false, 00:03:27.662 "nvme_io": false 00:03:27.662 }, 00:03:27.662 "memory_domains": [ 00:03:27.662 { 00:03:27.662 "dma_device_id": "system", 00:03:27.662 "dma_device_type": 1 00:03:27.662 }, 00:03:27.662 { 00:03:27.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.662 "dma_device_type": 2 00:03:27.662 } 00:03:27.662 ], 00:03:27.662 "driver_specific": { 00:03:27.662 "passthru": { 00:03:27.662 "name": "Passthru0", 00:03:27.662 "base_bdev_name": "Malloc0" 00:03:27.662 } 00:03:27.662 } 00:03:27.662 } 00:03:27.662 ]' 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:27.662 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.662 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.663 
04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.663 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.663 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.663 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:27.663 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:27.663 04:04:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:27.663 00:03:27.663 real 0m0.233s 00:03:27.663 user 0m0.155s 00:03:27.663 sys 0m0.021s 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.663 04:04:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.663 ************************************ 00:03:27.663 END TEST rpc_integrity 00:03:27.663 ************************************ 00:03:27.663 04:04:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:27.663 04:04:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:27.663 04:04:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.663 04:04:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.663 ************************************ 00:03:27.663 START TEST rpc_plugins 00:03:27.663 ************************************ 00:03:27.663 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:27.663 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:27.663 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.663 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:27.921 { 00:03:27.921 "name": "Malloc1", 00:03:27.921 "aliases": [ 00:03:27.921 "020cc294-2748-45f1-be14-ffe9a3798c28" 00:03:27.921 ], 00:03:27.921 "product_name": "Malloc disk", 00:03:27.921 "block_size": 4096, 00:03:27.921 "num_blocks": 256, 00:03:27.921 "uuid": "020cc294-2748-45f1-be14-ffe9a3798c28", 00:03:27.921 "assigned_rate_limits": { 00:03:27.921 "rw_ios_per_sec": 0, 00:03:27.921 "rw_mbytes_per_sec": 0, 00:03:27.921 "r_mbytes_per_sec": 0, 00:03:27.921 "w_mbytes_per_sec": 0 00:03:27.921 }, 00:03:27.921 "claimed": false, 00:03:27.921 "zoned": false, 00:03:27.921 "supported_io_types": { 00:03:27.921 "read": true, 00:03:27.921 "write": true, 00:03:27.921 "unmap": true, 00:03:27.921 "write_zeroes": true, 00:03:27.921 
"flush": true, 00:03:27.921 "reset": true, 00:03:27.921 "compare": false, 00:03:27.921 "compare_and_write": false, 00:03:27.921 "abort": true, 00:03:27.921 "nvme_admin": false, 00:03:27.921 "nvme_io": false 00:03:27.921 }, 00:03:27.921 "memory_domains": [ 00:03:27.921 { 00:03:27.921 "dma_device_id": "system", 00:03:27.921 "dma_device_type": 1 00:03:27.921 }, 00:03:27.921 { 00:03:27.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.921 "dma_device_type": 2 00:03:27.921 } 00:03:27.921 ], 00:03:27.921 "driver_specific": {} 00:03:27.921 } 00:03:27.921 ]' 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:27.921 04:04:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:27.921 00:03:27.921 real 0m0.113s 00:03:27.921 user 0m0.073s 00:03:27.921 sys 0m0.011s 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.921 04:04:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 ************************************ 00:03:27.921 END TEST rpc_plugins 00:03:27.921 ************************************ 00:03:27.921 04:04:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:27.921 04:04:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:27.921 04:04:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.921 04:04:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 ************************************ 00:03:27.921 START TEST rpc_trace_cmd_test 00:03:27.921 ************************************ 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:27.921 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3242835", 00:03:27.921 "tpoint_group_mask": "0x8", 00:03:27.921 "iscsi_conn": { 00:03:27.921 "mask": "0x2", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "scsi": { 00:03:27.921 "mask": "0x4", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "bdev": { 00:03:27.921 "mask": "0x8", 00:03:27.921 "tpoint_mask": 
"0xffffffffffffffff" 00:03:27.921 }, 00:03:27.921 "nvmf_rdma": { 00:03:27.921 "mask": "0x10", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "nvmf_tcp": { 00:03:27.921 "mask": "0x20", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "ftl": { 00:03:27.921 "mask": "0x40", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "blobfs": { 00:03:27.921 "mask": "0x80", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "dsa": { 00:03:27.921 "mask": "0x200", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "thread": { 00:03:27.921 "mask": "0x400", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "nvme_pcie": { 00:03:27.921 "mask": "0x800", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "iaa": { 00:03:27.921 "mask": "0x1000", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "nvme_tcp": { 00:03:27.921 "mask": "0x2000", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "bdev_nvme": { 00:03:27.921 "mask": "0x4000", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 }, 00:03:27.921 "sock": { 00:03:27.921 "mask": "0x8000", 00:03:27.921 "tpoint_mask": "0x0" 00:03:27.921 } 00:03:27.921 }' 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:27.921 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:28.179 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:28.179 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:28.179 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:28.179 04:04:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:28.179 04:04:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:28.179 00:03:28.179 real 0m0.200s 00:03:28.179 user 0m0.178s 00:03:28.179 sys 0m0.015s 00:03:28.179 04:04:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:28.179 04:04:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.179 ************************************ 00:03:28.179 END TEST rpc_trace_cmd_test 00:03:28.179 ************************************ 00:03:28.179 04:04:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:28.179 04:04:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:28.179 04:04:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:28.179 04:04:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:28.179 04:04:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.179 04:04:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.179 ************************************ 00:03:28.179 START TEST rpc_daemon_integrity 00:03:28.179 ************************************ 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.179 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.179 { 00:03:28.179 "name": "Malloc2", 00:03:28.179 "aliases": [ 00:03:28.179 "d828c7c5-f802-428f-b624-3e2865d1dea5" 00:03:28.179 ], 00:03:28.179 "product_name": "Malloc disk", 00:03:28.179 "block_size": 512, 00:03:28.179 "num_blocks": 16384, 00:03:28.179 "uuid": "d828c7c5-f802-428f-b624-3e2865d1dea5", 00:03:28.179 "assigned_rate_limits": { 00:03:28.179 "rw_ios_per_sec": 0, 00:03:28.179 "rw_mbytes_per_sec": 0, 00:03:28.179 "r_mbytes_per_sec": 0, 00:03:28.179 "w_mbytes_per_sec": 0 00:03:28.179 }, 00:03:28.179 "claimed": false, 00:03:28.179 "zoned": false, 00:03:28.179 "supported_io_types": { 00:03:28.179 "read": true, 00:03:28.179 "write": true, 00:03:28.179 "unmap": true, 00:03:28.179 "write_zeroes": true, 00:03:28.179 "flush": true, 00:03:28.179 "reset": true, 00:03:28.179 "compare": false, 00:03:28.179 "compare_and_write": false, 00:03:28.179 "abort": true, 00:03:28.179 "nvme_admin": false, 00:03:28.179 "nvme_io": false 00:03:28.179 }, 00:03:28.179 "memory_domains": [ 00:03:28.179 { 00:03:28.179 "dma_device_id": "system", 00:03:28.180 "dma_device_type": 1 00:03:28.180 }, 00:03:28.180 { 00:03:28.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.180 "dma_device_type": 2 00:03:28.180 } 00:03:28.180 ], 00:03:28.180 "driver_specific": {} 00:03:28.180 } 00:03:28.180 ]' 00:03:28.180 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.180 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.180 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:28.180 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.180 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.437 [2024-05-15 04:04:16.194923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:28.437 [2024-05-15 04:04:16.194996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.437 [2024-05-15 04:04:16.195024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17357b0 00:03:28.437 [2024-05-15 04:04:16.195040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.437 [2024-05-15 04:04:16.196324] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.437 [2024-05-15 04:04:16.196352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.437 Passthru0 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.437 { 00:03:28.437 "name": "Malloc2", 00:03:28.437 "aliases": [ 00:03:28.437 "d828c7c5-f802-428f-b624-3e2865d1dea5" 00:03:28.437 ], 00:03:28.437 "product_name": "Malloc disk", 00:03:28.437 "block_size": 512, 00:03:28.437 "num_blocks": 16384, 00:03:28.437 "uuid": "d828c7c5-f802-428f-b624-3e2865d1dea5", 00:03:28.437 "assigned_rate_limits": { 00:03:28.437 "rw_ios_per_sec": 0, 00:03:28.437 "rw_mbytes_per_sec": 0, 00:03:28.437 "r_mbytes_per_sec": 0, 00:03:28.437 "w_mbytes_per_sec": 0 00:03:28.437 }, 00:03:28.437 "claimed": true, 00:03:28.437 "claim_type": "exclusive_write", 00:03:28.437 "zoned": false, 00:03:28.437 "supported_io_types": { 00:03:28.437 "read": true, 00:03:28.437 "write": true, 00:03:28.437 "unmap": true, 00:03:28.437 "write_zeroes": true, 00:03:28.437 "flush": true, 00:03:28.437 "reset": true, 00:03:28.437 "compare": false, 00:03:28.437 "compare_and_write": false, 00:03:28.437 "abort": true, 00:03:28.437 "nvme_admin": false, 00:03:28.437 "nvme_io": false 00:03:28.437 }, 00:03:28.437 "memory_domains": [ 00:03:28.437 { 00:03:28.437 "dma_device_id": "system", 00:03:28.437 "dma_device_type": 1 00:03:28.437 }, 00:03:28.437 { 00:03:28.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.437 "dma_device_type": 2 00:03:28.437 } 00:03:28.437 ], 00:03:28.437 "driver_specific": {} 00:03:28.437 }, 00:03:28.437 { 00:03:28.437 "name": "Passthru0", 00:03:28.437 "aliases": [ 00:03:28.437 "efbf0ac3-233a-506b-8656-33019c7ff7e1" 00:03:28.437 ], 00:03:28.437 "product_name": "passthru", 00:03:28.437 "block_size": 512, 00:03:28.437 "num_blocks": 16384, 00:03:28.437 "uuid": "efbf0ac3-233a-506b-8656-33019c7ff7e1", 00:03:28.437 "assigned_rate_limits": { 00:03:28.437 "rw_ios_per_sec": 0, 00:03:28.437 "rw_mbytes_per_sec": 0, 00:03:28.437 "r_mbytes_per_sec": 0, 00:03:28.437 "w_mbytes_per_sec": 0 00:03:28.437 }, 00:03:28.437 "claimed": false, 00:03:28.437 "zoned": false, 00:03:28.437 "supported_io_types": { 00:03:28.437 "read": true, 00:03:28.437 "write": true, 00:03:28.437 "unmap": true, 00:03:28.437 "write_zeroes": true, 00:03:28.437 "flush": true, 00:03:28.437 "reset": true, 00:03:28.437 "compare": false, 00:03:28.437 "compare_and_write": false, 00:03:28.437 "abort": true, 00:03:28.437 "nvme_admin": false, 00:03:28.437 "nvme_io": false 00:03:28.437 }, 00:03:28.437 "memory_domains": [ 00:03:28.437 { 00:03:28.437 "dma_device_id": "system", 00:03:28.437 "dma_device_type": 1 00:03:28.437 }, 00:03:28.437 { 00:03:28.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.437 "dma_device_type": 2 00:03:28.437 } 00:03:28.437 ], 00:03:28.437 "driver_specific": { 00:03:28.437 "passthru": { 00:03:28.437 "name": "Passthru0", 00:03:28.437 "base_bdev_name": "Malloc2" 00:03:28.437 } 00:03:28.437 } 00:03:28.437 } 00:03:28.437 ]' 00:03:28.437 04:04:16 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.437 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.438 00:03:28.438 real 0m0.223s 00:03:28.438 user 0m0.151s 00:03:28.438 sys 0m0.019s 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:28.438 04:04:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.438 ************************************ 00:03:28.438 END TEST rpc_daemon_integrity 00:03:28.438 ************************************ 00:03:28.438 04:04:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:28.438 04:04:16 rpc -- rpc/rpc.sh@84 -- # killprocess 3242835 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@946 -- # '[' -z 3242835 ']' 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@950 -- # kill -0 3242835 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@951 -- # uname 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3242835 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3242835' 00:03:28.438 killing process with pid 3242835 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@965 -- # kill 3242835 00:03:28.438 04:04:16 rpc -- common/autotest_common.sh@970 -- # wait 3242835 00:03:29.002 00:03:29.003 real 0m2.011s 00:03:29.003 user 0m2.511s 00:03:29.003 sys 0m0.600s 00:03:29.003 04:04:16 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:29.003 04:04:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.003 ************************************ 00:03:29.003 END TEST rpc 00:03:29.003 ************************************ 00:03:29.003 04:04:16 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.003 04:04:16 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:29.003 04:04:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:29.003 04:04:16 -- common/autotest_common.sh@10 -- # set +x 00:03:29.003 ************************************ 00:03:29.003 START TEST skip_rpc 00:03:29.003 ************************************ 00:03:29.003 04:04:16 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.003 * Looking for test storage... 00:03:29.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:29.003 04:04:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.003 04:04:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:29.003 04:04:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:29.003 04:04:16 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:29.003 04:04:16 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:29.003 04:04:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.003 ************************************ 00:03:29.003 START TEST skip_rpc 00:03:29.003 ************************************ 00:03:29.003 04:04:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:29.003 04:04:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3243181 00:03:29.003 04:04:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:29.003 04:04:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.003 04:04:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:29.003 [2024-05-15 04:04:16.998968] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:03:29.003 [2024-05-15 04:04:16.999076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243181 ] 00:03:29.260 EAL: No free 2048 kB hugepages reported on node 1 00:03:29.260 [2024-05-15 04:04:17.067996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.260 [2024-05-15 04:04:17.183076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3243181 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3243181 ']' 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3243181 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3243181 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3243181' 00:03:34.523 killing process with pid 3243181 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3243181 00:03:34.523 04:04:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3243181 00:03:34.523 00:03:34.523 real 0m5.491s 00:03:34.523 user 0m5.148s 00:03:34.523 sys 0m0.351s 00:03:34.523 04:04:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.523 04:04:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.523 ************************************ 00:03:34.523 END TEST skip_rpc 
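The skip_rpc case above starts the target with --no-rpc-server, so the RPC attempt is expected to fail. A condensed sketch of that check, with the workspace paths shortened:

  # Illustrative sketch, not captured output: with --no-rpc-server the target
  # never opens /var/tmp/spdk.sock, so any RPC must fail.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server should not be listening" >&2
      exit 1
  fi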
00:03:34.523 ************************************ 00:03:34.523 04:04:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:34.523 04:04:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:34.523 04:04:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:34.523 04:04:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.523 ************************************ 00:03:34.523 START TEST skip_rpc_with_json 00:03:34.523 ************************************ 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3243870 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3243870 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3243870 ']' 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:34.523 04:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.783 [2024-05-15 04:04:22.546251] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:03:34.783 [2024-05-15 04:04:22.546349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243870 ] 00:03:34.783 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.783 [2024-05-15 04:04:22.617574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.783 [2024-05-15 04:04:22.734975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.719 [2024-05-15 04:04:23.483727] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:35.719 request: 00:03:35.719 { 00:03:35.719 "trtype": "tcp", 00:03:35.719 "method": "nvmf_get_transports", 00:03:35.719 "req_id": 1 00:03:35.719 } 00:03:35.719 Got JSON-RPC error response 00:03:35.719 response: 00:03:35.719 { 00:03:35.719 "code": -19, 00:03:35.719 "message": "No such device" 00:03:35.719 } 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.719 [2024-05-15 04:04:23.491846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:35.719 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.719 { 00:03:35.719 "subsystems": [ 00:03:35.719 { 00:03:35.719 "subsystem": "vfio_user_target", 00:03:35.719 "config": null 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "keyring", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "iobuf", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "iobuf_set_options", 00:03:35.719 "params": { 00:03:35.719 "small_pool_count": 8192, 00:03:35.719 "large_pool_count": 1024, 00:03:35.719 "small_bufsize": 8192, 00:03:35.719 "large_bufsize": 135168 00:03:35.719 } 00:03:35.719 } 00:03:35.719 ] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "sock", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "sock_impl_set_options", 00:03:35.719 "params": { 00:03:35.719 "impl_name": "posix", 00:03:35.719 "recv_buf_size": 2097152, 00:03:35.719 "send_buf_size": 2097152, 
00:03:35.719 "enable_recv_pipe": true, 00:03:35.719 "enable_quickack": false, 00:03:35.719 "enable_placement_id": 0, 00:03:35.719 "enable_zerocopy_send_server": true, 00:03:35.719 "enable_zerocopy_send_client": false, 00:03:35.719 "zerocopy_threshold": 0, 00:03:35.719 "tls_version": 0, 00:03:35.719 "enable_ktls": false 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "sock_impl_set_options", 00:03:35.719 "params": { 00:03:35.719 "impl_name": "ssl", 00:03:35.719 "recv_buf_size": 4096, 00:03:35.719 "send_buf_size": 4096, 00:03:35.719 "enable_recv_pipe": true, 00:03:35.719 "enable_quickack": false, 00:03:35.719 "enable_placement_id": 0, 00:03:35.719 "enable_zerocopy_send_server": true, 00:03:35.719 "enable_zerocopy_send_client": false, 00:03:35.719 "zerocopy_threshold": 0, 00:03:35.719 "tls_version": 0, 00:03:35.719 "enable_ktls": false 00:03:35.719 } 00:03:35.719 } 00:03:35.719 ] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "vmd", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "accel", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "accel_set_options", 00:03:35.719 "params": { 00:03:35.719 "small_cache_size": 128, 00:03:35.719 "large_cache_size": 16, 00:03:35.719 "task_count": 2048, 00:03:35.719 "sequence_count": 2048, 00:03:35.719 "buf_count": 2048 00:03:35.719 } 00:03:35.719 } 00:03:35.719 ] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "bdev", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "bdev_set_options", 00:03:35.719 "params": { 00:03:35.719 "bdev_io_pool_size": 65535, 00:03:35.719 "bdev_io_cache_size": 256, 00:03:35.719 "bdev_auto_examine": true, 00:03:35.719 "iobuf_small_cache_size": 128, 00:03:35.719 "iobuf_large_cache_size": 16 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "bdev_raid_set_options", 00:03:35.719 "params": { 00:03:35.719 "process_window_size_kb": 1024 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "bdev_iscsi_set_options", 00:03:35.719 "params": { 00:03:35.719 "timeout_sec": 30 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "bdev_nvme_set_options", 00:03:35.719 "params": { 00:03:35.719 "action_on_timeout": "none", 00:03:35.719 "timeout_us": 0, 00:03:35.719 "timeout_admin_us": 0, 00:03:35.719 "keep_alive_timeout_ms": 10000, 00:03:35.719 "arbitration_burst": 0, 00:03:35.719 "low_priority_weight": 0, 00:03:35.719 "medium_priority_weight": 0, 00:03:35.719 "high_priority_weight": 0, 00:03:35.719 "nvme_adminq_poll_period_us": 10000, 00:03:35.719 "nvme_ioq_poll_period_us": 0, 00:03:35.719 "io_queue_requests": 0, 00:03:35.719 "delay_cmd_submit": true, 00:03:35.719 "transport_retry_count": 4, 00:03:35.719 "bdev_retry_count": 3, 00:03:35.719 "transport_ack_timeout": 0, 00:03:35.719 "ctrlr_loss_timeout_sec": 0, 00:03:35.719 "reconnect_delay_sec": 0, 00:03:35.719 "fast_io_fail_timeout_sec": 0, 00:03:35.719 "disable_auto_failback": false, 00:03:35.719 "generate_uuids": false, 00:03:35.719 "transport_tos": 0, 00:03:35.719 "nvme_error_stat": false, 00:03:35.719 "rdma_srq_size": 0, 00:03:35.719 "io_path_stat": false, 00:03:35.719 "allow_accel_sequence": false, 00:03:35.719 "rdma_max_cq_size": 0, 00:03:35.719 "rdma_cm_event_timeout_ms": 0, 00:03:35.719 "dhchap_digests": [ 00:03:35.719 "sha256", 00:03:35.719 "sha384", 00:03:35.719 "sha512" 00:03:35.719 ], 00:03:35.719 "dhchap_dhgroups": [ 00:03:35.719 "null", 00:03:35.719 "ffdhe2048", 00:03:35.719 "ffdhe3072", 00:03:35.719 "ffdhe4096", 00:03:35.719 
"ffdhe6144", 00:03:35.719 "ffdhe8192" 00:03:35.719 ] 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "bdev_nvme_set_hotplug", 00:03:35.719 "params": { 00:03:35.719 "period_us": 100000, 00:03:35.719 "enable": false 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "bdev_wait_for_examine" 00:03:35.719 } 00:03:35.719 ] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "scsi", 00:03:35.719 "config": null 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "scheduler", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "framework_set_scheduler", 00:03:35.719 "params": { 00:03:35.719 "name": "static" 00:03:35.719 } 00:03:35.719 } 00:03:35.719 ] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "vhost_scsi", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "vhost_blk", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "ublk", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "nbd", 00:03:35.719 "config": [] 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "subsystem": "nvmf", 00:03:35.719 "config": [ 00:03:35.719 { 00:03:35.719 "method": "nvmf_set_config", 00:03:35.719 "params": { 00:03:35.719 "discovery_filter": "match_any", 00:03:35.719 "admin_cmd_passthru": { 00:03:35.719 "identify_ctrlr": false 00:03:35.719 } 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "nvmf_set_max_subsystems", 00:03:35.719 "params": { 00:03:35.719 "max_subsystems": 1024 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "nvmf_set_crdt", 00:03:35.719 "params": { 00:03:35.719 "crdt1": 0, 00:03:35.719 "crdt2": 0, 00:03:35.719 "crdt3": 0 00:03:35.719 } 00:03:35.719 }, 00:03:35.719 { 00:03:35.719 "method": "nvmf_create_transport", 00:03:35.719 "params": { 00:03:35.719 "trtype": "TCP", 00:03:35.719 "max_queue_depth": 128, 00:03:35.719 "max_io_qpairs_per_ctrlr": 127, 00:03:35.720 "in_capsule_data_size": 4096, 00:03:35.720 "max_io_size": 131072, 00:03:35.720 "io_unit_size": 131072, 00:03:35.720 "max_aq_depth": 128, 00:03:35.720 "num_shared_buffers": 511, 00:03:35.720 "buf_cache_size": 4294967295, 00:03:35.720 "dif_insert_or_strip": false, 00:03:35.720 "zcopy": false, 00:03:35.720 "c2h_success": true, 00:03:35.720 "sock_priority": 0, 00:03:35.720 "abort_timeout_sec": 1, 00:03:35.720 "ack_timeout": 0, 00:03:35.720 "data_wr_pool_size": 0 00:03:35.720 } 00:03:35.720 } 00:03:35.720 ] 00:03:35.720 }, 00:03:35.720 { 00:03:35.720 "subsystem": "iscsi", 00:03:35.720 "config": [ 00:03:35.720 { 00:03:35.720 "method": "iscsi_set_options", 00:03:35.720 "params": { 00:03:35.720 "node_base": "iqn.2016-06.io.spdk", 00:03:35.720 "max_sessions": 128, 00:03:35.720 "max_connections_per_session": 2, 00:03:35.720 "max_queue_depth": 64, 00:03:35.720 "default_time2wait": 2, 00:03:35.720 "default_time2retain": 20, 00:03:35.720 "first_burst_length": 8192, 00:03:35.720 "immediate_data": true, 00:03:35.720 "allow_duplicated_isid": false, 00:03:35.720 "error_recovery_level": 0, 00:03:35.720 "nop_timeout": 60, 00:03:35.720 "nop_in_interval": 30, 00:03:35.720 "disable_chap": false, 00:03:35.720 "require_chap": false, 00:03:35.720 "mutual_chap": false, 00:03:35.720 "chap_group": 0, 00:03:35.720 "max_large_datain_per_connection": 64, 00:03:35.720 "max_r2t_per_connection": 4, 00:03:35.720 "pdu_pool_size": 36864, 00:03:35.720 "immediate_data_pool_size": 16384, 00:03:35.720 "data_out_pool_size": 2048 00:03:35.720 } 00:03:35.720 } 00:03:35.720 ] 00:03:35.720 } 
00:03:35.720 ] 00:03:35.720 } 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3243870 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3243870 ']' 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3243870 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3243870 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3243870' 00:03:35.720 killing process with pid 3243870 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3243870 00:03:35.720 04:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3243870 00:03:36.286 04:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3244134 00:03:36.286 04:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.286 04:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3244134 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3244134 ']' 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3244134 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3244134 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3244134' 00:03:41.594 killing process with pid 3244134 00:03:41.594 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3244134 00:03:41.595 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3244134 00:03:41.595 04:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.595 04:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.595 00:03:41.595 real 0m7.110s 00:03:41.595 user 0m6.865s 00:03:41.595 sys 0m0.760s 00:03:41.595 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
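The skip_rpc_with_json flow whose output ends above saves the runtime configuration and replays it without an RPC server. Roughly, with paths shortened:

  # Illustrative sketch, not captured output, of the save_config/replay sequence above.
  RPC="scripts/rpc.py"
  $RPC nvmf_get_transports --trtype tcp || true    # expected to fail: no transport yet
  $RPC nvmf_create_transport -t tcp                # logs "*** TCP Transport Init ***"
  $RPC save_config > config.json                   # the JSON dump seen above
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1
  grep -q 'TCP Transport Init' log.txt             # transport recreated from the JSON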
00:03:41.595 04:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.595 ************************************ 00:03:41.595 END TEST skip_rpc_with_json 00:03:41.595 ************************************ 00:03:41.854 04:04:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.854 ************************************ 00:03:41.854 START TEST skip_rpc_with_delay 00:03:41.854 ************************************ 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.854 [2024-05-15 04:04:29.708277] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
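The error above is the point of skip_rpc_with_delay: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server must be rejected. A minimal sketch of the check:

  # Illustrative sketch, not captured output: this invocation is expected to fail
  # with "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: invalid option combination was accepted" >&2
      exit 1
  fi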
00:03:41.854 [2024-05-15 04:04:29.708393] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:41.854 00:03:41.854 real 0m0.065s 00:03:41.854 user 0m0.036s 00:03:41.854 sys 0m0.029s 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.854 04:04:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:41.854 ************************************ 00:03:41.854 END TEST skip_rpc_with_delay 00:03:41.854 ************************************ 00:03:41.854 04:04:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:41.854 04:04:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:41.854 04:04:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.854 04:04:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.854 ************************************ 00:03:41.854 START TEST exit_on_failed_rpc_init 00:03:41.854 ************************************ 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3244854 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3244854 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3244854 ']' 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:41.854 04:04:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.854 [2024-05-15 04:04:29.825838] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:03:41.854 [2024-05-15 04:04:29.825937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244854 ] 00:03:41.854 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.113 [2024-05-15 04:04:29.899710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.113 [2024-05-15 04:04:30.024256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:42.371 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.371 [2024-05-15 04:04:30.339661] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:03:42.371 [2024-05-15 04:04:30.339734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244870 ] 00:03:42.371 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.630 [2024-05-15 04:04:30.412105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.630 [2024-05-15 04:04:30.532092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:42.630 [2024-05-15 04:04:30.532216] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
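exit_on_failed_rpc_init launches a second target against the same default RPC socket; the "in use. Specify another." error above is what makes that second instance exit non-zero. A minimal sketch of the scenario:

  # Illustrative sketch, not captured output: the second spdk_tgt reuses the
  # default /var/tmp/spdk.sock and must fail RPC initialization.
  ./build/bin/spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
  FIRST_PID=$!
  if ./build/bin/spdk_tgt -m 0x2; then # expected: rpc listen fails, app stops non-zero
      echo "unexpected: second instance should not have started" >&2
      exit 1
  fi
  kill $FIRST_PID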
00:03:42.630 [2024-05-15 04:04:30.532236] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:42.630 [2024-05-15 04:04:30.532247] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3244854 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3244854 ']' 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3244854 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3244854 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3244854' 00:03:42.888 killing process with pid 3244854 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3244854 00:03:42.888 04:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3244854 00:03:43.147 00:03:43.147 real 0m1.374s 00:03:43.147 user 0m1.565s 00:03:43.147 sys 0m0.475s 00:03:43.147 04:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:43.147 04:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.147 ************************************ 00:03:43.147 END TEST exit_on_failed_rpc_init 00:03:43.147 ************************************ 00:03:43.406 04:04:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.406 00:03:43.406 real 0m14.297s 00:03:43.406 user 0m13.717s 00:03:43.406 sys 0m1.776s 00:03:43.406 04:04:31 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:43.406 04:04:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.406 ************************************ 00:03:43.406 END TEST skip_rpc 00:03:43.406 ************************************ 00:03:43.406 04:04:31 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.406 04:04:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.406 04:04:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.406 04:04:31 -- 
common/autotest_common.sh@10 -- # set +x 00:03:43.406 ************************************ 00:03:43.406 START TEST rpc_client 00:03:43.406 ************************************ 00:03:43.406 04:04:31 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.406 * Looking for test storage... 00:03:43.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:43.406 04:04:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:43.406 OK 00:03:43.406 04:04:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:43.406 00:03:43.406 real 0m0.068s 00:03:43.406 user 0m0.035s 00:03:43.406 sys 0m0.039s 00:03:43.406 04:04:31 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:43.406 04:04:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:43.406 ************************************ 00:03:43.406 END TEST rpc_client 00:03:43.406 ************************************ 00:03:43.406 04:04:31 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.406 04:04:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.406 04:04:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.406 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:03:43.406 ************************************ 00:03:43.406 START TEST json_config 00:03:43.406 ************************************ 00:03:43.406 04:04:31 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.406 04:04:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:43.406 04:04:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:43.407 04:04:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.407 04:04:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.407 04:04:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.407 04:04:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.407 04:04:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.407 04:04:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.407 04:04:31 json_config -- paths/export.sh@5 -- # export PATH 00:03:43.407 04:04:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@47 -- # : 0 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:43.407 04:04:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:43.407 INFO: JSON configuration test init 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.407 04:04:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:43.407 04:04:31 json_config -- json_config/common.sh@9 -- # local app=target 00:03:43.407 04:04:31 json_config -- json_config/common.sh@10 -- # shift 00:03:43.407 04:04:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:43.407 04:04:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:43.407 04:04:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:43.407 04:04:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.407 04:04:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.407 04:04:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3245112 00:03:43.407 04:04:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:43.407 04:04:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:43.407 Waiting for target to run... 
00:03:43.407 04:04:31 json_config -- json_config/common.sh@25 -- # waitforlisten 3245112 /var/tmp/spdk_tgt.sock 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@827 -- # '[' -z 3245112 ']' 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:43.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:43.407 04:04:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.665 [2024-05-15 04:04:31.457095] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:03:43.665 [2024-05-15 04:04:31.457188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245112 ] 00:03:43.665 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.923 [2024-05-15 04:04:31.808885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.923 [2024-05-15 04:04:31.897911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:44.490 04:04:32 json_config -- json_config/common.sh@26 -- # echo '' 00:03:44.490 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.490 04:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:44.490 04:04:32 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:44.490 04:04:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:47.771 04:04:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:47.771 04:04:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:47.771 04:04:35 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:47.771 04:04:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:47.771 04:04:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:48.028 04:04:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.028 04:04:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:48.028 04:04:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:48.028 04:04:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.028 04:04:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:48.029 04:04:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:48.029 04:04:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:48.029 04:04:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.029 04:04:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.287 MallocForNvmf0 00:03:48.287 04:04:36 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.287 04:04:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.546 MallocForNvmf1 00:03:48.546 04:04:36 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.546 04:04:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.546 [2024-05-15 04:04:36.556420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.805 04:04:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.805 04:04:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:49.063 04:04:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:49.063 04:04:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:49.063 04:04:37 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:49.063 04:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:49.321 04:04:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:49.321 04:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:49.579 [2024-05-15 04:04:37.543130] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:03:49.579 [2024-05-15 04:04:37.543744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:49.580 04:04:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:49.580 04:04:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.580 04:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.580 04:04:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:49.580 04:04:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.580 04:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.838 04:04:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:49.838 04:04:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.838 04:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.838 MallocBdevForConfigChangeCheck 00:03:49.838 04:04:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:49.838 04:04:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.838 04:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.096 04:04:37 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:50.096 04:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.354 04:04:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:50.354 INFO: shutting down applications... 
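For orientation, the create_nvmf_subsystem_config step traced above reduces to the following rpc.py sequence against the target's /var/tmp/spdk_tgt.sock (commands copied from the invocations logged above, with the workspace prefix shortened to scripts/rpc.py):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The resulting state (two malloc bdevs exposed as namespaces of cnode1, listening on 127.0.0.1:4420 over TCP) is what save_config serializes into spdk_tgt_config.json for the comparison steps that follow.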
00:03:50.354 04:04:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:50.354 04:04:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:50.354 04:04:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:50.354 04:04:38 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:52.253 Calling clear_iscsi_subsystem 00:03:52.253 Calling clear_nvmf_subsystem 00:03:52.253 Calling clear_nbd_subsystem 00:03:52.253 Calling clear_ublk_subsystem 00:03:52.253 Calling clear_vhost_blk_subsystem 00:03:52.253 Calling clear_vhost_scsi_subsystem 00:03:52.253 Calling clear_bdev_subsystem 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:52.253 04:04:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:52.511 04:04:40 json_config -- json_config/json_config.sh@345 -- # break 00:03:52.511 04:04:40 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:52.511 04:04:40 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:52.511 04:04:40 json_config -- json_config/common.sh@31 -- # local app=target 00:03:52.511 04:04:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:52.511 04:04:40 json_config -- json_config/common.sh@35 -- # [[ -n 3245112 ]] 00:03:52.511 04:04:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3245112 00:03:52.511 [2024-05-15 04:04:40.279755] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:03:52.511 04:04:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:52.511 04:04:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.511 04:04:40 json_config -- json_config/common.sh@41 -- # kill -0 3245112 00:03:52.511 04:04:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:52.771 04:04:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:52.771 04:04:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.771 04:04:40 json_config -- json_config/common.sh@41 -- # kill -0 3245112 00:03:52.771 04:04:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:52.771 04:04:40 json_config -- json_config/common.sh@43 -- # break 00:03:52.771 04:04:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:52.771 04:04:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:52.771 SPDK target shutdown done 00:03:52.771 04:04:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:03:52.771 INFO: relaunching applications... 00:03:52.771 04:04:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.771 04:04:40 json_config -- json_config/common.sh@9 -- # local app=target 00:03:52.771 04:04:40 json_config -- json_config/common.sh@10 -- # shift 00:03:52.771 04:04:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.771 04:04:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.771 04:04:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.029 04:04:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.029 04:04:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.029 04:04:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3246374 00:03:53.029 04:04:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.029 Waiting for target to run... 00:03:53.029 04:04:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.029 04:04:40 json_config -- json_config/common.sh@25 -- # waitforlisten 3246374 /var/tmp/spdk_tgt.sock 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@827 -- # '[' -z 3246374 ']' 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:53.029 04:04:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.029 [2024-05-15 04:04:40.837316] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:03:53.029 [2024-05-15 04:04:40.837426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246374 ] 00:03:53.029 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.598 [2024-05-15 04:04:41.361223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.598 [2024-05-15 04:04:41.468382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.889 [2024-05-15 04:04:44.514906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.889 [2024-05-15 04:04:44.546877] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:03:56.889 [2024-05-15 04:04:44.547423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:57.454 04:04:45 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:57.455 04:04:45 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:57.455 04:04:45 json_config -- json_config/common.sh@26 -- # echo '' 00:03:57.455 00:03:57.455 04:04:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:57.455 04:04:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:57.455 INFO: Checking if target configuration is the same... 00:03:57.455 04:04:45 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.455 04:04:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:57.455 04:04:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.455 + '[' 2 -ne 2 ']' 00:03:57.455 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:57.455 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:57.455 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.455 +++ basename /dev/fd/62 00:03:57.455 ++ mktemp /tmp/62.XXX 00:03:57.455 + tmp_file_1=/tmp/62.cvM 00:03:57.455 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.455 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:57.455 + tmp_file_2=/tmp/spdk_tgt_config.json.pyd 00:03:57.455 + ret=0 00:03:57.455 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.718 + diff -u /tmp/62.cvM /tmp/spdk_tgt_config.json.pyd 00:03:57.718 + echo 'INFO: JSON config files are the same' 00:03:57.718 INFO: JSON config files are the same 00:03:57.718 + rm /tmp/62.cvM /tmp/spdk_tgt_config.json.pyd 00:03:57.718 + exit 0 00:03:57.718 04:04:45 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:57.718 04:04:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:57.718 INFO: changing configuration and checking if this can be detected... 
00:03:57.718 04:04:45 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.718 04:04:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.005 04:04:45 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.005 04:04:45 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:58.005 04:04:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.005 + '[' 2 -ne 2 ']' 00:03:58.005 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.005 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:58.005 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.005 +++ basename /dev/fd/62 00:03:58.005 ++ mktemp /tmp/62.XXX 00:03:58.005 + tmp_file_1=/tmp/62.JYl 00:03:58.005 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.005 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.005 + tmp_file_2=/tmp/spdk_tgt_config.json.Jpl 00:03:58.005 + ret=0 00:03:58.005 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.268 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.526 + diff -u /tmp/62.JYl /tmp/spdk_tgt_config.json.Jpl 00:03:58.526 + ret=1 00:03:58.526 + echo '=== Start of file: /tmp/62.JYl ===' 00:03:58.526 + cat /tmp/62.JYl 00:03:58.526 + echo '=== End of file: /tmp/62.JYl ===' 00:03:58.526 + echo '' 00:03:58.526 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Jpl ===' 00:03:58.526 + cat /tmp/spdk_tgt_config.json.Jpl 00:03:58.526 + echo '=== End of file: /tmp/spdk_tgt_config.json.Jpl ===' 00:03:58.526 + echo '' 00:03:58.526 + rm /tmp/62.JYl /tmp/spdk_tgt_config.json.Jpl 00:03:58.526 + exit 1 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:58.526 INFO: configuration change detected. 
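Both comparison passes above use the same mechanism: json_diff.sh feeds the live configuration (tgt_rpc save_config) and the reference spdk_tgt_config.json through config_filter.py -method sort and then runs diff -u on the two sorted documents, so key-ordering differences alone never count as a mismatch. Deleting MallocBdevForConfigChangeCheck is what turns the second pass from "JSON config files are the same" into "configuration change detected." A rough standalone equivalent, assuming config_filter.py reads the configuration on stdin as json_diff.sh uses it, and with illustrative temp-file names in place of the script's mktemp output:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
  test/json_config/config_filter.py -method sort < live.json > live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > ref.sorted
  diff -u ref.sorted live.sorted && echo 'configs match'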
00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@317 -- # [[ -n 3246374 ]] 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.526 04:04:46 json_config -- json_config/json_config.sh@323 -- # killprocess 3246374 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@946 -- # '[' -z 3246374 ']' 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@950 -- # kill -0 3246374 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@951 -- # uname 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3246374 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3246374' 00:03:58.526 killing process with pid 3246374 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@965 -- # kill 3246374 00:03:58.526 [2024-05-15 04:04:46.382786] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:03:58.526 04:04:46 json_config -- common/autotest_common.sh@970 -- # wait 3246374 00:04:00.426 04:04:48 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.426 04:04:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:00.426 04:04:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.426 04:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 04:04:48 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:00.426 04:04:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:00.426 INFO: Success 00:04:00.426 00:04:00.426 real 0m16.718s 00:04:00.426 user 0m18.686s 00:04:00.426 sys 0m2.019s 00:04:00.426 04:04:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.426 04:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 ************************************ 00:04:00.426 END TEST json_config 00:04:00.426 ************************************ 00:04:00.426 04:04:48 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:00.426 04:04:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.426 04:04:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.426 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 ************************************ 00:04:00.426 START TEST json_config_extra_key 00:04:00.426 ************************************ 00:04:00.426 04:04:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:00.426 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:00.426 04:04:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.426 04:04:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.426 
04:04:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.426 04:04:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.426 04:04:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.426 04:04:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.426 04:04:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:00.426 04:04:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:00.426 04:04:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:00.426 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:00.426 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:00.426 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:00.427 04:04:48 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:00.427 INFO: launching applications... 00:04:00.427 04:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3247343 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.427 Waiting for target to run... 00:04:00.427 04:04:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3247343 /var/tmp/spdk_tgt.sock 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3247343 ']' 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:00.427 04:04:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.427 [2024-05-15 04:04:48.207515] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:00.427 [2024-05-15 04:04:48.207609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247343 ] 00:04:00.427 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.684 [2024-05-15 04:04:48.560810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.684 [2024-05-15 04:04:48.652734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.249 04:04:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:01.249 04:04:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:01.249 00:04:01.249 04:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:01.249 INFO: shutting down applications... 00:04:01.249 04:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3247343 ]] 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3247343 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3247343 00:04:01.249 04:04:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.814 04:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.814 04:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.814 04:04:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3247343 00:04:01.814 04:04:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3247343 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.380 04:04:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.380 SPDK target shutdown done 00:04:02.380 04:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:02.380 Success 00:04:02.380 00:04:02.380 real 0m2.027s 00:04:02.380 user 0m1.535s 00:04:02.380 sys 0m0.430s 00:04:02.380 04:04:50 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.380 04:04:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.380 ************************************ 00:04:02.380 END TEST json_config_extra_key 00:04:02.380 ************************************ 00:04:02.380 04:04:50 -- spdk/autotest.sh@170 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.380 04:04:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.380 04:04:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.380 04:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:02.380 ************************************ 00:04:02.380 START TEST alias_rpc 00:04:02.380 ************************************ 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.380 * Looking for test storage... 00:04:02.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:02.380 04:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.380 04:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3247656 00:04:02.380 04:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.380 04:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3247656 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3247656 ']' 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:02.380 04:04:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.380 [2024-05-15 04:04:50.293869] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:02.380 [2024-05-15 04:04:50.293970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247656 ] 00:04:02.380 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.380 [2024-05-15 04:04:50.362641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.638 [2024-05-15 04:04:50.473069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.895 04:04:50 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:02.895 04:04:50 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:02.895 04:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:03.153 04:04:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3247656 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3247656 ']' 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3247656 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3247656 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3247656' 00:04:03.153 killing process with pid 3247656 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 3247656 00:04:03.153 04:04:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 3247656 00:04:03.720 00:04:03.720 real 0m1.308s 00:04:03.720 user 0m1.376s 00:04:03.720 sys 0m0.442s 00:04:03.720 04:04:51 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.720 04:04:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.720 ************************************ 00:04:03.720 END TEST alias_rpc 00:04:03.720 ************************************ 00:04:03.720 04:04:51 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:03.720 04:04:51 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.720 04:04:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:03.720 04:04:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:03.720 04:04:51 -- common/autotest_common.sh@10 -- # set +x 00:04:03.720 ************************************ 00:04:03.720 START TEST spdkcli_tcp 00:04:03.720 ************************************ 00:04:03.720 04:04:51 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.720 * Looking for test storage... 
00:04:03.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:03.720 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3247850 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.721 04:04:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3247850 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3247850 ']' 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:03.721 04:04:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.721 [2024-05-15 04:04:51.654791] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:03.721 [2024-05-15 04:04:51.654871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247850 ] 00:04:03.721 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.721 [2024-05-15 04:04:51.720808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.980 [2024-05-15 04:04:51.830410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.980 [2024-05-15 04:04:51.830414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.915 04:04:52 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:04.915 04:04:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:04.915 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3247987 00:04:04.915 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:04.915 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:04.915 [ 00:04:04.915 "bdev_malloc_delete", 00:04:04.915 "bdev_malloc_create", 00:04:04.915 "bdev_null_resize", 00:04:04.915 "bdev_null_delete", 00:04:04.915 "bdev_null_create", 00:04:04.915 "bdev_nvme_cuse_unregister", 00:04:04.915 "bdev_nvme_cuse_register", 00:04:04.915 "bdev_opal_new_user", 00:04:04.915 "bdev_opal_set_lock_state", 00:04:04.915 "bdev_opal_delete", 00:04:04.915 "bdev_opal_get_info", 00:04:04.915 "bdev_opal_create", 00:04:04.915 "bdev_nvme_opal_revert", 00:04:04.915 "bdev_nvme_opal_init", 00:04:04.915 "bdev_nvme_send_cmd", 00:04:04.915 "bdev_nvme_get_path_iostat", 00:04:04.915 "bdev_nvme_get_mdns_discovery_info", 00:04:04.915 "bdev_nvme_stop_mdns_discovery", 00:04:04.915 "bdev_nvme_start_mdns_discovery", 00:04:04.915 "bdev_nvme_set_multipath_policy", 00:04:04.915 "bdev_nvme_set_preferred_path", 00:04:04.915 "bdev_nvme_get_io_paths", 00:04:04.915 "bdev_nvme_remove_error_injection", 00:04:04.915 "bdev_nvme_add_error_injection", 00:04:04.915 "bdev_nvme_get_discovery_info", 00:04:04.915 "bdev_nvme_stop_discovery", 00:04:04.915 "bdev_nvme_start_discovery", 00:04:04.915 "bdev_nvme_get_controller_health_info", 00:04:04.915 "bdev_nvme_disable_controller", 00:04:04.915 "bdev_nvme_enable_controller", 00:04:04.915 "bdev_nvme_reset_controller", 00:04:04.915 "bdev_nvme_get_transport_statistics", 00:04:04.915 "bdev_nvme_apply_firmware", 00:04:04.915 "bdev_nvme_detach_controller", 00:04:04.915 "bdev_nvme_get_controllers", 00:04:04.915 "bdev_nvme_attach_controller", 00:04:04.915 "bdev_nvme_set_hotplug", 00:04:04.915 "bdev_nvme_set_options", 00:04:04.915 "bdev_passthru_delete", 00:04:04.915 "bdev_passthru_create", 00:04:04.915 "bdev_lvol_check_shallow_copy", 00:04:04.915 "bdev_lvol_start_shallow_copy", 00:04:04.915 "bdev_lvol_grow_lvstore", 00:04:04.915 "bdev_lvol_get_lvols", 00:04:04.915 "bdev_lvol_get_lvstores", 00:04:04.915 "bdev_lvol_delete", 00:04:04.915 "bdev_lvol_set_read_only", 00:04:04.915 "bdev_lvol_resize", 00:04:04.915 "bdev_lvol_decouple_parent", 00:04:04.915 "bdev_lvol_inflate", 00:04:04.915 "bdev_lvol_rename", 00:04:04.915 "bdev_lvol_clone_bdev", 00:04:04.915 "bdev_lvol_clone", 00:04:04.915 "bdev_lvol_snapshot", 00:04:04.915 "bdev_lvol_create", 00:04:04.915 "bdev_lvol_delete_lvstore", 00:04:04.915 "bdev_lvol_rename_lvstore", 00:04:04.915 "bdev_lvol_create_lvstore", 00:04:04.915 "bdev_raid_set_options", 
00:04:04.915 "bdev_raid_remove_base_bdev", 00:04:04.915 "bdev_raid_add_base_bdev", 00:04:04.915 "bdev_raid_delete", 00:04:04.915 "bdev_raid_create", 00:04:04.915 "bdev_raid_get_bdevs", 00:04:04.915 "bdev_error_inject_error", 00:04:04.915 "bdev_error_delete", 00:04:04.915 "bdev_error_create", 00:04:04.915 "bdev_split_delete", 00:04:04.915 "bdev_split_create", 00:04:04.915 "bdev_delay_delete", 00:04:04.915 "bdev_delay_create", 00:04:04.915 "bdev_delay_update_latency", 00:04:04.915 "bdev_zone_block_delete", 00:04:04.915 "bdev_zone_block_create", 00:04:04.915 "blobfs_create", 00:04:04.915 "blobfs_detect", 00:04:04.915 "blobfs_set_cache_size", 00:04:04.915 "bdev_aio_delete", 00:04:04.915 "bdev_aio_rescan", 00:04:04.915 "bdev_aio_create", 00:04:04.915 "bdev_ftl_set_property", 00:04:04.915 "bdev_ftl_get_properties", 00:04:04.915 "bdev_ftl_get_stats", 00:04:04.915 "bdev_ftl_unmap", 00:04:04.915 "bdev_ftl_unload", 00:04:04.915 "bdev_ftl_delete", 00:04:04.915 "bdev_ftl_load", 00:04:04.915 "bdev_ftl_create", 00:04:04.915 "bdev_virtio_attach_controller", 00:04:04.915 "bdev_virtio_scsi_get_devices", 00:04:04.915 "bdev_virtio_detach_controller", 00:04:04.915 "bdev_virtio_blk_set_hotplug", 00:04:04.915 "bdev_iscsi_delete", 00:04:04.915 "bdev_iscsi_create", 00:04:04.915 "bdev_iscsi_set_options", 00:04:04.915 "accel_error_inject_error", 00:04:04.915 "ioat_scan_accel_module", 00:04:04.915 "dsa_scan_accel_module", 00:04:04.915 "iaa_scan_accel_module", 00:04:04.915 "vfu_virtio_create_scsi_endpoint", 00:04:04.915 "vfu_virtio_scsi_remove_target", 00:04:04.915 "vfu_virtio_scsi_add_target", 00:04:04.915 "vfu_virtio_create_blk_endpoint", 00:04:04.915 "vfu_virtio_delete_endpoint", 00:04:04.915 "keyring_file_remove_key", 00:04:04.915 "keyring_file_add_key", 00:04:04.915 "iscsi_get_histogram", 00:04:04.915 "iscsi_enable_histogram", 00:04:04.915 "iscsi_set_options", 00:04:04.915 "iscsi_get_auth_groups", 00:04:04.915 "iscsi_auth_group_remove_secret", 00:04:04.915 "iscsi_auth_group_add_secret", 00:04:04.915 "iscsi_delete_auth_group", 00:04:04.915 "iscsi_create_auth_group", 00:04:04.915 "iscsi_set_discovery_auth", 00:04:04.915 "iscsi_get_options", 00:04:04.915 "iscsi_target_node_request_logout", 00:04:04.915 "iscsi_target_node_set_redirect", 00:04:04.915 "iscsi_target_node_set_auth", 00:04:04.915 "iscsi_target_node_add_lun", 00:04:04.915 "iscsi_get_stats", 00:04:04.915 "iscsi_get_connections", 00:04:04.915 "iscsi_portal_group_set_auth", 00:04:04.915 "iscsi_start_portal_group", 00:04:04.915 "iscsi_delete_portal_group", 00:04:04.915 "iscsi_create_portal_group", 00:04:04.915 "iscsi_get_portal_groups", 00:04:04.915 "iscsi_delete_target_node", 00:04:04.915 "iscsi_target_node_remove_pg_ig_maps", 00:04:04.915 "iscsi_target_node_add_pg_ig_maps", 00:04:04.915 "iscsi_create_target_node", 00:04:04.915 "iscsi_get_target_nodes", 00:04:04.915 "iscsi_delete_initiator_group", 00:04:04.915 "iscsi_initiator_group_remove_initiators", 00:04:04.916 "iscsi_initiator_group_add_initiators", 00:04:04.916 "iscsi_create_initiator_group", 00:04:04.916 "iscsi_get_initiator_groups", 00:04:04.916 "nvmf_set_crdt", 00:04:04.916 "nvmf_set_config", 00:04:04.916 "nvmf_set_max_subsystems", 00:04:04.916 "nvmf_stop_mdns_prr", 00:04:04.916 "nvmf_publish_mdns_prr", 00:04:04.916 "nvmf_subsystem_get_listeners", 00:04:04.916 "nvmf_subsystem_get_qpairs", 00:04:04.916 "nvmf_subsystem_get_controllers", 00:04:04.916 "nvmf_get_stats", 00:04:04.916 "nvmf_get_transports", 00:04:04.916 "nvmf_create_transport", 00:04:04.916 "nvmf_get_targets", 00:04:04.916 
"nvmf_delete_target", 00:04:04.916 "nvmf_create_target", 00:04:04.916 "nvmf_subsystem_allow_any_host", 00:04:04.916 "nvmf_subsystem_remove_host", 00:04:04.916 "nvmf_subsystem_add_host", 00:04:04.916 "nvmf_ns_remove_host", 00:04:04.916 "nvmf_ns_add_host", 00:04:04.916 "nvmf_subsystem_remove_ns", 00:04:04.916 "nvmf_subsystem_add_ns", 00:04:04.916 "nvmf_subsystem_listener_set_ana_state", 00:04:04.916 "nvmf_discovery_get_referrals", 00:04:04.916 "nvmf_discovery_remove_referral", 00:04:04.916 "nvmf_discovery_add_referral", 00:04:04.916 "nvmf_subsystem_remove_listener", 00:04:04.916 "nvmf_subsystem_add_listener", 00:04:04.916 "nvmf_delete_subsystem", 00:04:04.916 "nvmf_create_subsystem", 00:04:04.916 "nvmf_get_subsystems", 00:04:04.916 "env_dpdk_get_mem_stats", 00:04:04.916 "nbd_get_disks", 00:04:04.916 "nbd_stop_disk", 00:04:04.916 "nbd_start_disk", 00:04:04.916 "ublk_recover_disk", 00:04:04.916 "ublk_get_disks", 00:04:04.916 "ublk_stop_disk", 00:04:04.916 "ublk_start_disk", 00:04:04.916 "ublk_destroy_target", 00:04:04.916 "ublk_create_target", 00:04:04.916 "virtio_blk_create_transport", 00:04:04.916 "virtio_blk_get_transports", 00:04:04.916 "vhost_controller_set_coalescing", 00:04:04.916 "vhost_get_controllers", 00:04:04.916 "vhost_delete_controller", 00:04:04.916 "vhost_create_blk_controller", 00:04:04.916 "vhost_scsi_controller_remove_target", 00:04:04.916 "vhost_scsi_controller_add_target", 00:04:04.916 "vhost_start_scsi_controller", 00:04:04.916 "vhost_create_scsi_controller", 00:04:04.916 "thread_set_cpumask", 00:04:04.916 "framework_get_scheduler", 00:04:04.916 "framework_set_scheduler", 00:04:04.916 "framework_get_reactors", 00:04:04.916 "thread_get_io_channels", 00:04:04.916 "thread_get_pollers", 00:04:04.916 "thread_get_stats", 00:04:04.916 "framework_monitor_context_switch", 00:04:04.916 "spdk_kill_instance", 00:04:04.916 "log_enable_timestamps", 00:04:04.916 "log_get_flags", 00:04:04.916 "log_clear_flag", 00:04:04.916 "log_set_flag", 00:04:04.916 "log_get_level", 00:04:04.916 "log_set_level", 00:04:04.916 "log_get_print_level", 00:04:04.916 "log_set_print_level", 00:04:04.916 "framework_enable_cpumask_locks", 00:04:04.916 "framework_disable_cpumask_locks", 00:04:04.916 "framework_wait_init", 00:04:04.916 "framework_start_init", 00:04:04.916 "scsi_get_devices", 00:04:04.916 "bdev_get_histogram", 00:04:04.916 "bdev_enable_histogram", 00:04:04.916 "bdev_set_qos_limit", 00:04:04.916 "bdev_set_qd_sampling_period", 00:04:04.916 "bdev_get_bdevs", 00:04:04.916 "bdev_reset_iostat", 00:04:04.916 "bdev_get_iostat", 00:04:04.916 "bdev_examine", 00:04:04.916 "bdev_wait_for_examine", 00:04:04.916 "bdev_set_options", 00:04:04.916 "notify_get_notifications", 00:04:04.916 "notify_get_types", 00:04:04.916 "accel_get_stats", 00:04:04.916 "accel_set_options", 00:04:04.916 "accel_set_driver", 00:04:04.916 "accel_crypto_key_destroy", 00:04:04.916 "accel_crypto_keys_get", 00:04:04.916 "accel_crypto_key_create", 00:04:04.916 "accel_assign_opc", 00:04:04.916 "accel_get_module_info", 00:04:04.916 "accel_get_opc_assignments", 00:04:04.916 "vmd_rescan", 00:04:04.916 "vmd_remove_device", 00:04:04.916 "vmd_enable", 00:04:04.916 "sock_get_default_impl", 00:04:04.916 "sock_set_default_impl", 00:04:04.916 "sock_impl_set_options", 00:04:04.916 "sock_impl_get_options", 00:04:04.916 "iobuf_get_stats", 00:04:04.916 "iobuf_set_options", 00:04:04.916 "keyring_get_keys", 00:04:04.916 "framework_get_pci_devices", 00:04:04.916 "framework_get_config", 00:04:04.916 "framework_get_subsystems", 00:04:04.916 
"vfu_tgt_set_base_path", 00:04:04.916 "trace_get_info", 00:04:04.916 "trace_get_tpoint_group_mask", 00:04:04.916 "trace_disable_tpoint_group", 00:04:04.916 "trace_enable_tpoint_group", 00:04:04.916 "trace_clear_tpoint_mask", 00:04:04.916 "trace_set_tpoint_mask", 00:04:04.916 "spdk_get_version", 00:04:04.916 "rpc_get_methods" 00:04:04.916 ] 00:04:04.916 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.916 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:04.916 04:04:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3247850 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3247850 ']' 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3247850 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3247850 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3247850' 00:04:04.916 killing process with pid 3247850 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3247850 00:04:04.916 04:04:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3247850 00:04:05.482 00:04:05.482 real 0m1.779s 00:04:05.482 user 0m3.391s 00:04:05.482 sys 0m0.486s 00:04:05.482 04:04:53 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.482 04:04:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.482 ************************************ 00:04:05.482 END TEST spdkcli_tcp 00:04:05.482 ************************************ 00:04:05.482 04:04:53 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.482 04:04:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.482 04:04:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.482 04:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:05.482 ************************************ 00:04:05.482 START TEST dpdk_mem_utility 00:04:05.482 ************************************ 00:04:05.482 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.482 * Looking for test storage... 
00:04:05.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:05.483 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.483 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3248174 00:04:05.483 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.483 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3248174 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3248174 ']' 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:05.483 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.483 [2024-05-15 04:04:53.476283] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:05.483 [2024-05-15 04:04:53.476374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248174 ] 00:04:05.741 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.741 [2024-05-15 04:04:53.551896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.741 [2024-05-15 04:04:53.669257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.000 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:06.000 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:06.000 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:06.000 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:06.000 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.000 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.000 { 00:04:06.000 "filename": "/tmp/spdk_mem_dump.txt" 00:04:06.000 } 00:04:06.000 04:04:53 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.000 04:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:06.000 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:06.000 1 heaps totaling size 814.000000 MiB 00:04:06.000 size: 814.000000 MiB heap id: 0 00:04:06.000 end heaps---------- 00:04:06.000 8 mempools totaling size 598.116089 MiB 00:04:06.000 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:06.000 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:06.000 size: 84.521057 MiB name: bdev_io_3248174 00:04:06.000 size: 51.011292 MiB name: evtpool_3248174 00:04:06.000 size: 50.003479 MiB name: 
msgpool_3248174 00:04:06.000 size: 21.763794 MiB name: PDU_Pool 00:04:06.000 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:06.000 size: 0.026123 MiB name: Session_Pool 00:04:06.000 end mempools------- 00:04:06.000 6 memzones totaling size 4.142822 MiB 00:04:06.000 size: 1.000366 MiB name: RG_ring_0_3248174 00:04:06.000 size: 1.000366 MiB name: RG_ring_1_3248174 00:04:06.000 size: 1.000366 MiB name: RG_ring_4_3248174 00:04:06.000 size: 1.000366 MiB name: RG_ring_5_3248174 00:04:06.000 size: 0.125366 MiB name: RG_ring_2_3248174 00:04:06.000 size: 0.015991 MiB name: RG_ring_3_3248174 00:04:06.000 end memzones------- 00:04:06.000 04:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:06.259 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:06.259 list of free elements. size: 12.519348 MiB 00:04:06.259 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:06.259 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:06.259 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:06.259 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:06.259 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:06.259 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:06.259 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:06.259 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:06.259 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:06.259 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:06.259 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:06.259 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:06.259 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:06.259 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:06.259 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:06.259 list of standard malloc elements. 
size: 199.218079 MiB 00:04:06.259 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:06.259 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:06.259 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:06.259 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:06.259 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:06.259 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:06.259 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:06.259 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:06.259 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:06.259 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:06.259 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:06.259 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:06.259 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:06.259 list of memzone associated elements. 
size: 602.262573 MiB 00:04:06.259 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:06.259 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:06.259 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:06.259 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:06.259 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:06.259 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3248174_0 00:04:06.259 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:06.259 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3248174_0 00:04:06.259 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:06.260 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3248174_0 00:04:06.260 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:06.260 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:06.260 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:06.260 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:06.260 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:06.260 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3248174 00:04:06.260 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:06.260 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3248174 00:04:06.260 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:06.260 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3248174 00:04:06.260 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:06.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:06.260 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:06.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:06.260 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:06.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:06.260 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:06.260 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:06.260 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:06.260 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3248174 00:04:06.260 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:06.260 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3248174 00:04:06.260 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:06.260 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3248174 00:04:06.260 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:06.260 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3248174 00:04:06.260 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:06.260 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3248174 00:04:06.260 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:06.260 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:06.260 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:06.260 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:06.260 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:06.260 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:06.260 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:06.260 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3248174 00:04:06.260 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:06.260 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:06.260 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:06.260 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:06.260 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:06.260 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3248174 00:04:06.260 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:06.260 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:06.260 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:06.260 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3248174 00:04:06.260 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:06.260 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3248174 00:04:06.260 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:06.260 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:06.260 04:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:06.260 04:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3248174 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3248174 ']' 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3248174 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3248174 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3248174' 00:04:06.260 killing process with pid 3248174 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3248174 00:04:06.260 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3248174 00:04:06.518 00:04:06.518 real 0m1.153s 00:04:06.518 user 0m1.132s 00:04:06.518 sys 0m0.422s 00:04:06.518 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.518 04:04:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.518 ************************************ 00:04:06.518 END TEST dpdk_mem_utility 00:04:06.518 ************************************ 00:04:06.777 04:04:54 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.777 04:04:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.777 04:04:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.777 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.777 ************************************ 00:04:06.777 START TEST event 00:04:06.777 ************************************ 00:04:06.777 04:04:54 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.777 * Looking for test storage... 
00:04:06.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:06.777 04:04:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:06.777 04:04:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.777 04:04:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.777 04:04:54 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:06.777 04:04:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.777 04:04:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.777 ************************************ 00:04:06.777 START TEST event_perf 00:04:06.777 ************************************ 00:04:06.777 04:04:54 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.777 Running I/O for 1 seconds...[2024-05-15 04:04:54.682827] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:06.777 [2024-05-15 04:04:54.682894] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248370 ] 00:04:06.777 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.777 [2024-05-15 04:04:54.754464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.035 [2024-05-15 04:04:54.869566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.035 [2024-05-15 04:04:54.869634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.035 [2024-05-15 04:04:54.869729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.035 [2024-05-15 04:04:54.869732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.412 Running I/O for 1 seconds... 00:04:08.412 lcore 0: 231272 00:04:08.412 lcore 1: 231271 00:04:08.412 lcore 2: 231272 00:04:08.412 lcore 3: 231271 00:04:08.412 done. 00:04:08.412 00:04:08.412 real 0m1.326s 00:04:08.412 user 0m4.221s 00:04:08.412 sys 0m0.100s 00:04:08.412 04:04:55 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.412 04:04:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:08.412 ************************************ 00:04:08.412 END TEST event_perf 00:04:08.412 ************************************ 00:04:08.412 04:04:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.412 04:04:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:08.412 04:04:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.412 04:04:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.412 ************************************ 00:04:08.412 START TEST event_reactor 00:04:08.412 ************************************ 00:04:08.412 04:04:56 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.412 [2024-05-15 04:04:56.063922] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:08.412 [2024-05-15 04:04:56.064028] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248528 ] 00:04:08.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.412 [2024-05-15 04:04:56.139724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.412 [2024-05-15 04:04:56.256402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.786 test_start 00:04:09.786 oneshot 00:04:09.786 tick 100 00:04:09.786 tick 100 00:04:09.786 tick 250 00:04:09.786 tick 100 00:04:09.786 tick 100 00:04:09.786 tick 250 00:04:09.786 tick 500 00:04:09.786 tick 100 00:04:09.786 tick 100 00:04:09.786 tick 100 00:04:09.786 tick 250 00:04:09.786 tick 100 00:04:09.786 tick 100 00:04:09.786 test_end 00:04:09.786 00:04:09.786 real 0m1.332s 00:04:09.786 user 0m1.232s 00:04:09.787 sys 0m0.095s 00:04:09.787 04:04:57 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.787 04:04:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:09.787 ************************************ 00:04:09.787 END TEST event_reactor 00:04:09.787 ************************************ 00:04:09.787 04:04:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.787 04:04:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:09.787 04:04:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.787 04:04:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.787 ************************************ 00:04:09.787 START TEST event_reactor_perf 00:04:09.787 ************************************ 00:04:09.787 04:04:57 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.787 [2024-05-15 04:04:57.450855] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:09.787 [2024-05-15 04:04:57.450925] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248682 ] 00:04:09.787 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.787 [2024-05-15 04:04:57.526813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.787 [2024-05-15 04:04:57.642941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.161 test_start 00:04:11.161 test_end 00:04:11.161 Performance: 356192 events per second 00:04:11.161 00:04:11.161 real 0m1.327s 00:04:11.161 user 0m1.229s 00:04:11.161 sys 0m0.093s 00:04:11.161 04:04:58 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.161 04:04:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.161 ************************************ 00:04:11.161 END TEST event_reactor_perf 00:04:11.161 ************************************ 00:04:11.161 04:04:58 event -- event/event.sh@49 -- # uname -s 00:04:11.161 04:04:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:11.161 04:04:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.161 04:04:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.161 04:04:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.161 04:04:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.161 ************************************ 00:04:11.161 START TEST event_scheduler 00:04:11.161 ************************************ 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.161 * Looking for test storage... 00:04:11.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:11.161 04:04:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:11.161 04:04:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3248938 00:04:11.161 04:04:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:11.161 04:04:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.161 04:04:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3248938 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3248938 ']' 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:11.161 04:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.161 [2024-05-15 04:04:58.909125] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:11.161 [2024-05-15 04:04:58.909215] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248938 ] 00:04:11.161 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.161 [2024-05-15 04:04:58.978509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.161 [2024-05-15 04:04:59.087806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.161 [2024-05-15 04:04:59.087870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.161 [2024-05-15 04:04:59.087947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.161 [2024-05-15 04:04:59.087951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:11.161 04:04:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.161 POWER: Env isn't set yet! 00:04:11.161 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:11.161 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:11.161 POWER: Cannot get available frequencies of lcore 0 00:04:11.161 POWER: Attempting to initialise PSTAT power management... 00:04:11.161 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:11.161 POWER: Initialized successfully for lcore 0 power management 00:04:11.161 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:11.161 POWER: Initialized successfully for lcore 1 power management 00:04:11.161 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:11.161 POWER: Initialized successfully for lcore 2 power management 00:04:11.161 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:11.161 POWER: Initialized successfully for lcore 3 power management 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.161 04:04:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.161 04:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 [2024-05-15 04:04:59.248858] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:11.420 04:04:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:11.420 04:04:59 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.420 04:04:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 ************************************ 00:04:11.420 START TEST scheduler_create_thread 00:04:11.420 ************************************ 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 2 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 3 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 4 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 5 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 6 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 7 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 8 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 9 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 10 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.420 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.986 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.986 04:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:11.986 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.986 04:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.358 04:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.358 04:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:13.358 04:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:13.358 04:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.358 04:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.388 04:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.388 00:04:14.388 real 0m3.097s 00:04:14.388 user 0m0.009s 00:04:14.388 sys 0m0.004s 00:04:14.388 04:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.388 04:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.388 ************************************ 00:04:14.388 END TEST scheduler_create_thread 00:04:14.388 ************************************ 00:04:14.388 04:05:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:14.388 04:05:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3248938 00:04:14.388 04:05:02 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3248938 ']' 00:04:14.388 04:05:02 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3248938 00:04:14.388 04:05:02 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:14.388 04:05:02 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3248938 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3248938' 00:04:14.644 killing process with pid 3248938 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3248938 00:04:14.644 04:05:02 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3248938 00:04:14.902 [2024-05-15 04:05:02.757302] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:15.161 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:15.161 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:15.161 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:15.161 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:15.161 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:15.161 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:15.161 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:15.161 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:15.161 00:04:15.161 real 0m4.231s 00:04:15.161 user 0m6.833s 00:04:15.161 sys 0m0.328s 00:04:15.161 04:05:03 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.161 04:05:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.161 ************************************ 00:04:15.161 END TEST event_scheduler 00:04:15.161 ************************************ 00:04:15.161 04:05:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:15.161 04:05:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:15.161 04:05:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.161 04:05:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.161 04:05:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.161 ************************************ 00:04:15.161 START TEST app_repeat 00:04:15.161 ************************************ 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3249450 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3249450' 00:04:15.161 Process app_repeat pid: 3249450 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:15.161 spdk_app_start Round 0 00:04:15.161 04:05:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3249450 /var/tmp/spdk-nbd.sock 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3249450 ']' 00:04:15.161 04:05:03 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:15.161 04:05:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:15.161 [2024-05-15 04:05:03.138447] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:15.161 [2024-05-15 04:05:03.138506] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249450 ] 00:04:15.161 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.420 [2024-05-15 04:05:03.213062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.420 [2024-05-15 04:05:03.329283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.420 [2024-05-15 04:05:03.329288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.677 04:05:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:15.677 04:05:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:15.677 04:05:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.677 Malloc0 00:04:15.935 04:05:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.194 Malloc1 00:04:16.194 04:05:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.194 04:05:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.452 /dev/nbd0 00:04:16.452 04:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.452 04:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.453 1+0 records in 00:04:16.453 1+0 records out 00:04:16.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139771 s, 29.3 MB/s 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:16.453 04:05:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:16.453 04:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.453 04:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.453 04:05:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:16.711 /dev/nbd1 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.711 1+0 records in 00:04:16.711 1+0 records out 00:04:16.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000222412 s, 18.4 MB/s 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:16.711 04:05:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.711 04:05:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:16.969 { 00:04:16.969 "nbd_device": "/dev/nbd0", 00:04:16.969 "bdev_name": "Malloc0" 00:04:16.969 }, 00:04:16.969 { 00:04:16.969 "nbd_device": "/dev/nbd1", 00:04:16.969 "bdev_name": "Malloc1" 00:04:16.969 } 00:04:16.969 ]' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:16.969 { 00:04:16.969 "nbd_device": "/dev/nbd0", 00:04:16.969 "bdev_name": "Malloc0" 00:04:16.969 }, 00:04:16.969 { 00:04:16.969 "nbd_device": "/dev/nbd1", 00:04:16.969 "bdev_name": "Malloc1" 00:04:16.969 } 00:04:16.969 ]' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:16.969 /dev/nbd1' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:16.969 /dev/nbd1' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:16.969 256+0 records in 00:04:16.969 256+0 records out 00:04:16.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502126 s, 209 MB/s 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:16.969 256+0 records in 00:04:16.969 256+0 records out 00:04:16.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240972 s, 43.5 MB/s 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:16.969 256+0 records in 00:04:16.969 256+0 records out 00:04:16.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260386 s, 40.3 MB/s 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.969 04:05:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.226 04:05:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.483 04:05:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:17.742 04:05:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:17.742 04:05:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.000 04:05:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:18.258 [2024-05-15 04:05:06.239450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.516 [2024-05-15 04:05:06.356019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.516 [2024-05-15 04:05:06.356019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.516 [2024-05-15 04:05:06.417874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:18.516 [2024-05-15 04:05:06.417983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
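Round 0 above is one pass of the malloc-bdev-over-nbd round trip that every app_repeat round performs. A condensed sketch of that pass, with the long workspace paths shortened and the RPC socket path taken from the trace:

RPC='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'               # socket path from the trace
$RPC bdev_malloc_create 64 4096                              # -> Malloc0 (64 MiB, 4 KiB blocks)
$RPC bdev_malloc_create 64 4096                              # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB of random data
for d in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct     # write it through the nbd device
  cmp -b -n 1M nbdrandtest $d                                # read back and verify the round trip
done
rm nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1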
00:04:21.055 04:05:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:21.055 04:05:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:21.055 spdk_app_start Round 1 00:04:21.055 04:05:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3249450 /var/tmp/spdk-nbd.sock 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3249450 ']' 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:21.055 04:05:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:21.314 04:05:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:21.314 04:05:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:21.314 04:05:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.573 Malloc0 00:04:21.573 04:05:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.831 Malloc1 00:04:21.831 04:05:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.831 04:05:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.831 04:05:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.832 04:05:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.090 /dev/nbd0 00:04:22.090 04:05:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.090 04:05:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.090 1+0 records in 00:04:22.090 1+0 records out 00:04:22.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000135116 s, 30.3 MB/s 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:22.090 04:05:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:22.090 04:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.090 04:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.090 04:05:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.348 /dev/nbd1 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.348 1+0 records in 00:04:22.348 1+0 records out 00:04:22.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180061 s, 22.7 MB/s 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:22.348 04:05:10 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:22.348 04:05:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.348 04:05:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:22.607 { 00:04:22.607 "nbd_device": "/dev/nbd0", 00:04:22.607 "bdev_name": "Malloc0" 00:04:22.607 }, 00:04:22.607 { 00:04:22.607 "nbd_device": "/dev/nbd1", 00:04:22.607 "bdev_name": "Malloc1" 00:04:22.607 } 00:04:22.607 ]' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.607 { 00:04:22.607 "nbd_device": "/dev/nbd0", 00:04:22.607 "bdev_name": "Malloc0" 00:04:22.607 }, 00:04:22.607 { 00:04:22.607 "nbd_device": "/dev/nbd1", 00:04:22.607 "bdev_name": "Malloc1" 00:04:22.607 } 00:04:22.607 ]' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.607 /dev/nbd1' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.607 /dev/nbd1' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.607 256+0 records in 00:04:22.607 256+0 records out 00:04:22.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508769 s, 206 MB/s 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.607 256+0 records in 00:04:22.607 256+0 records out 00:04:22.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0236691 s, 44.3 MB/s 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.607 04:05:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:22.865 256+0 records in 00:04:22.865 256+0 records out 00:04:22.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257333 s, 40.7 MB/s 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.865 04:05:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.123 04:05:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.123 04:05:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.380 04:05:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:23.380 04:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:23.380 04:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:23.638 04:05:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:23.638 04:05:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:23.896 04:05:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.154 [2024-05-15 04:05:11.960054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.154 [2024-05-15 04:05:12.076070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.154 [2024-05-15 04:05:12.076075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.154 [2024-05-15 04:05:12.138687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.154 [2024-05-15 04:05:12.138771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
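Rounds 0 through 2 are produced by the driver loop in event.sh; its rough shape, with the per-round body elided (it is the nbd round trip sketched after Round 0), is approximately:

test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &   # options from the trace
repeat_pid=$!
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock           # wait for the RPC socket to come up
  # ... create Malloc0/Malloc1, attach nbd, write/verify ...
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3                                                    # let the app restart for the next round
done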
00:04:26.681 04:05:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.681 04:05:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:26.681 spdk_app_start Round 2 00:04:26.681 04:05:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3249450 /var/tmp/spdk-nbd.sock 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3249450 ']' 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:26.681 04:05:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.939 04:05:14 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:26.939 04:05:14 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:26.939 04:05:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.198 Malloc0 00:04:27.198 04:05:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.456 Malloc1 00:04:27.456 04:05:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.456 04:05:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:27.715 /dev/nbd0 00:04:27.715 04:05:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:27.715 04:05:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.715 1+0 records in 00:04:27.715 1+0 records out 00:04:27.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189834 s, 21.6 MB/s 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:27.715 04:05:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:27.715 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.715 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.715 04:05:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.974 /dev/nbd1 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.974 1+0 records in 00:04:27.974 1+0 records out 00:04:27.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207206 s, 19.8 MB/s 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:27.974 04:05:15 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:27.974 04:05:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.974 04:05:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:28.238 { 00:04:28.238 "nbd_device": "/dev/nbd0", 00:04:28.238 "bdev_name": "Malloc0" 00:04:28.238 }, 00:04:28.238 { 00:04:28.238 "nbd_device": "/dev/nbd1", 00:04:28.238 "bdev_name": "Malloc1" 00:04:28.238 } 00:04:28.238 ]' 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.238 { 00:04:28.238 "nbd_device": "/dev/nbd0", 00:04:28.238 "bdev_name": "Malloc0" 00:04:28.238 }, 00:04:28.238 { 00:04:28.238 "nbd_device": "/dev/nbd1", 00:04:28.238 "bdev_name": "Malloc1" 00:04:28.238 } 00:04:28.238 ]' 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.238 /dev/nbd1' 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.238 /dev/nbd1' 00:04:28.238 04:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:28.497 256+0 records in 00:04:28.497 256+0 records out 00:04:28.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503414 s, 208 MB/s 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:28.497 256+0 records in 00:04:28.497 256+0 records out 00:04:28.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0243119 s, 43.1 MB/s 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:28.497 256+0 records in 00:04:28.497 256+0 records out 00:04:28.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233244 s, 45.0 MB/s 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.497 04:05:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.755 04:05:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.014 04:05:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.272 04:05:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.272 04:05:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:29.531 04:05:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:29.789 [2024-05-15 04:05:17.672983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.789 [2024-05-15 04:05:17.788898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.789 [2024-05-15 04:05:17.788898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.048 [2024-05-15 04:05:17.850790] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.048 [2024-05-15 04:05:17.850860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
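The nbd_get_disks calls that close each round are how the test counts attached devices: the JSON reply is reduced to device paths and counted. A small sketch using the same jq/grep pipeline as the trace:

disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)   # || true: grep exits 1 on zero matches
echo "attached nbd devices: $count"   # 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk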
00:04:32.578 04:05:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3249450 /var/tmp/spdk-nbd.sock 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3249450 ']' 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:32.578 04:05:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:32.837 04:05:20 event.app_repeat -- event/event.sh@39 -- # killprocess 3249450 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3249450 ']' 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3249450 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3249450 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3249450' 00:04:32.837 killing process with pid 3249450 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3249450 00:04:32.837 04:05:20 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3249450 00:04:33.096 spdk_app_start is called in Round 0. 00:04:33.096 Shutdown signal received, stop current app iteration 00:04:33.096 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:04:33.096 spdk_app_start is called in Round 1. 00:04:33.096 Shutdown signal received, stop current app iteration 00:04:33.096 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:04:33.096 spdk_app_start is called in Round 2. 00:04:33.096 Shutdown signal received, stop current app iteration 00:04:33.096 Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 reinitialization... 00:04:33.096 spdk_app_start is called in Round 3. 
00:04:33.096 Shutdown signal received, stop current app iteration 00:04:33.096 04:05:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:33.096 04:05:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:33.096 00:04:33.096 real 0m17.816s 00:04:33.096 user 0m38.859s 00:04:33.096 sys 0m3.310s 00:04:33.096 04:05:20 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.096 04:05:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.096 ************************************ 00:04:33.096 END TEST app_repeat 00:04:33.096 ************************************ 00:04:33.096 04:05:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:33.096 04:05:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:33.096 04:05:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.096 04:05:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.096 04:05:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.096 ************************************ 00:04:33.096 START TEST cpu_locks 00:04:33.096 ************************************ 00:04:33.096 04:05:20 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:33.096 * Looking for test storage... 00:04:33.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:33.096 04:05:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:33.096 04:05:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:33.096 04:05:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:33.096 04:05:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:33.096 04:05:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.096 04:05:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.096 04:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.096 ************************************ 00:04:33.096 START TEST default_locks 00:04:33.096 ************************************ 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3251803 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3251803 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3251803 ']' 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
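The default_locks test that starts here follows the same start/verify/kill pattern as the event tests: launch spdk_tgt pinned to a single core, wait for its RPC socket, then (as the next lines show) confirm with lslocks that the process holds its spdk_cpu_lock file before killing it. A compressed sketch under those assumptions:

build/bin/spdk_tgt -m 0x1 &                          # one-core mask, as in the trace
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid                          # autotest_common.sh helper, default socket /var/tmp/spdk.sock
lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock     # the per-core lock file must be held
killprocess $spdk_tgt_pid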
00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:33.096 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.355 [2024-05-15 04:05:21.119807] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:33.355 [2024-05-15 04:05:21.119888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251803 ] 00:04:33.355 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.355 [2024-05-15 04:05:21.186095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.355 [2024-05-15 04:05:21.292626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.614 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:33.614 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:33.614 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3251803 00:04:33.614 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3251803 00:04:33.614 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.199 lslocks: write error 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3251803 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3251803 ']' 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3251803 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3251803 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3251803' 00:04:34.200 killing process with pid 3251803 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3251803 00:04:34.200 04:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3251803 00:04:34.463 04:05:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3251803 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3251803 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3251803 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3251803 ']' 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3251803) - No such process 00:04:34.464 ERROR: process (pid: 3251803) is no longer running 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:34.464 00:04:34.464 real 0m1.385s 00:04:34.464 user 0m1.313s 00:04:34.464 sys 0m0.556s 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.464 04:05:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.464 ************************************ 00:04:34.464 END TEST default_locks 00:04:34.464 ************************************ 00:04:34.722 04:05:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:34.722 04:05:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.722 04:05:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.722 04:05:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.722 ************************************ 00:04:34.722 START TEST default_locks_via_rpc 00:04:34.722 ************************************ 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3252085 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3252085 00:04:34.723 04:05:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3252085 ']' 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:34.723 04:05:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.723 [2024-05-15 04:05:22.558534] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:34.723 [2024-05-15 04:05:22.558611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252085 ] 00:04:34.723 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.723 [2024-05-15 04:05:22.627031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.981 [2024-05-15 04:05:22.742937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3252085 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3252085 00:04:35.551 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.809 04:05:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3252085 00:04:35.809 04:05:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3252085 ']' 00:04:35.809 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3252085 00:04:35.809 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:35.809 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:35.809 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3252085 00:04:36.067 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:36.067 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:36.067 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3252085' 00:04:36.067 killing process with pid 3252085 00:04:36.067 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3252085 00:04:36.067 04:05:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3252085 00:04:36.324 00:04:36.324 real 0m1.776s 00:04:36.324 user 0m1.889s 00:04:36.324 sys 0m0.585s 00:04:36.324 04:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.324 04:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.324 ************************************ 00:04:36.324 END TEST default_locks_via_rpc 00:04:36.324 ************************************ 00:04:36.324 04:05:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:36.324 04:05:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.324 04:05:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.324 04:05:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.324 ************************************ 00:04:36.324 START TEST non_locking_app_on_locked_coremask 00:04:36.324 ************************************ 00:04:36.324 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3252256 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3252256 /var/tmp/spdk.sock 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3252256 ']' 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
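Both default_locks variants earlier in this log reduce to one observable: while spdk_tgt runs with core locks enabled, lslocks reports an spdk_cpu_lock entry for its pid, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs release and re-acquire that lock at runtime. A minimal standalone sketch of the same check, assuming the spdk_tgt binary built in this tree, the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket, and a plain sleep in place of the harness's waitforlisten helper:

  # start a single-core target and give it a moment to come up
  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  sleep 2

  # with core locks enabled, the pid should hold an spdk_cpu_lock file lock
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock

  # release the per-core locks over RPC, then take them again
  ./scripts/rpc.py framework_disable_cpumask_locks
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock || echo "no core lock held"
  ./scripts/rpc.py framework_enable_cpumask_locks
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock

  kill "$tgt_pid"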
00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:36.582 04:05:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.582 [2024-05-15 04:05:24.388549] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:36.582 [2024-05-15 04:05:24.388629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252256 ] 00:04:36.582 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.582 [2024-05-15 04:05:24.460672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.582 [2024-05-15 04:05:24.577151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3252392 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3252392 /var/tmp/spdk2.sock 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3252392 ']' 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:37.514 04:05:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.514 [2024-05-15 04:05:25.380601] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:37.514 [2024-05-15 04:05:25.380689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252392 ] 00:04:37.514 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.514 [2024-05-15 04:05:25.490499] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.514 [2024-05-15 04:05:25.490532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.772 [2024-05-15 04:05:25.723973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.337 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:38.337 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:38.337 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3252256 00:04:38.337 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3252256 00:04:38.337 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.901 lslocks: write error 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3252256 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3252256 ']' 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3252256 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3252256 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3252256' 00:04:38.901 killing process with pid 3252256 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3252256 00:04:38.901 04:05:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3252256 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3252392 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3252392 ']' 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3252392 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3252392 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3252392' 00:04:39.833 
killing process with pid 3252392 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3252392 00:04:39.833 04:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3252392 00:04:40.398 00:04:40.398 real 0m3.796s 00:04:40.398 user 0m4.104s 00:04:40.398 sys 0m1.070s 00:04:40.398 04:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.398 04:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.398 ************************************ 00:04:40.398 END TEST non_locking_app_on_locked_coremask 00:04:40.398 ************************************ 00:04:40.398 04:05:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:40.398 04:05:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.398 04:05:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.398 04:05:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.398 ************************************ 00:04:40.398 START TEST locking_app_on_unlocked_coremask 00:04:40.398 ************************************ 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3252822 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3252822 /var/tmp/spdk.sock 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3252822 ']' 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:40.398 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.398 [2024-05-15 04:05:28.227044] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:40.398 [2024-05-15 04:05:28.227114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252822 ] 00:04:40.398 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.398 [2024-05-15 04:05:28.294287] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
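The non_locking_app_on_locked_coremask test that just wrapped up shows the supported way to share a core: the first target claims core 0, and a second target starts on that same core only because it is launched with --disable-cpumask-locks and its own RPC socket. A rough sketch of that arrangement, assuming the binary path relative to the repo root and sleep-based waiting rather than the harness's waitforlisten:

  # first instance claims the core 0 lock
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 2

  # second instance shares core 0 only because it opts out of core locks;
  # it needs a separate RPC socket so the two targets do not collide
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  sleep 2
  # expected in the second instance's output: "CPU core locks deactivated."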
00:04:40.398 [2024-05-15 04:05:28.294327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.398 [2024-05-15 04:05:28.405563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3252826 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3252826 /var/tmp/spdk2.sock 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3252826 ']' 00:04:40.656 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.657 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:40.657 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.657 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:40.657 04:05:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.914 [2024-05-15 04:05:28.717226] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:40.914 [2024-05-15 04:05:28.717309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252826 ] 00:04:40.914 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.914 [2024-05-15 04:05:28.828924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.172 [2024-05-15 04:05:29.068654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.738 04:05:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.738 04:05:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:41.738 04:05:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3252826 00:04:41.738 04:05:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3252826 00:04:41.738 04:05:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.308 lslocks: write error 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3252822 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3252822 ']' 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3252822 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3252822 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3252822' 00:04:42.308 killing process with pid 3252822 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3252822 00:04:42.308 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3252822 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3252826 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3252826 ']' 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3252826 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:43.277 04:05:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3252826 00:04:43.277 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:04:43.277 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:43.277 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3252826' 00:04:43.277 killing process with pid 3252826 00:04:43.277 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3252826 00:04:43.278 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3252826 00:04:43.535 00:04:43.535 real 0m3.293s 00:04:43.535 user 0m3.393s 00:04:43.535 sys 0m1.053s 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.535 ************************************ 00:04:43.535 END TEST locking_app_on_unlocked_coremask 00:04:43.535 ************************************ 00:04:43.535 04:05:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.535 04:05:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.535 04:05:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.535 04:05:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.535 ************************************ 00:04:43.535 START TEST locking_app_on_locked_coremask 00:04:43.535 ************************************ 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3253223 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3253223 /var/tmp/spdk.sock 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3253223 ']' 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.535 04:05:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.793 [2024-05-15 04:05:31.576726] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
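Several of these tests check not only that a lock exists but that the set of lock files matches the core mask; the harness does this through its no_locks and check_remaining_locks helpers, which expand /var/tmp/spdk_cpu_lock_*. A hand-rolled sketch of the same assertion for a three-core target, assuming the zero-padded lock-file names that appear later in this log:

  ./build/bin/spdk_tgt -m 0x7 &
  sleep 2

  # cores 0-2 should each be backed by one lock file
  for f in /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002; do
      [ -e "$f" ] || echo "missing expected lock file: $f"
  done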
00:04:43.793 [2024-05-15 04:05:31.576811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253223 ] 00:04:43.793 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.793 [2024-05-15 04:05:31.645012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.793 [2024-05-15 04:05:31.760430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3253277 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3253277 /var/tmp/spdk2.sock 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3253277 /var/tmp/spdk2.sock 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3253277 /var/tmp/spdk2.sock 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3253277 ']' 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.727 04:05:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.727 [2024-05-15 04:05:32.548961] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:44.727 [2024-05-15 04:05:32.549056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253277 ] 00:04:44.727 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.727 [2024-05-15 04:05:32.669765] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3253223 has claimed it. 00:04:44.727 [2024-05-15 04:05:32.669826] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3253277) - No such process 00:04:45.293 ERROR: process (pid: 3253277) is no longer running 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3253223 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3253223 00:04:45.293 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.552 lslocks: write error 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3253223 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3253223 ']' 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3253223 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.552 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3253223 00:04:45.810 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.810 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.810 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3253223' 00:04:45.810 killing process with pid 3253223 00:04:45.810 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3253223 00:04:45.810 04:05:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3253223 00:04:46.068 00:04:46.068 real 0m2.499s 00:04:46.068 user 0m2.826s 00:04:46.068 sys 0m0.708s 00:04:46.068 04:05:34 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.068 04:05:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.068 ************************************ 00:04:46.068 END TEST locking_app_on_locked_coremask 00:04:46.068 ************************************ 00:04:46.068 04:05:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:46.068 04:05:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.068 04:05:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.068 04:05:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.068 ************************************ 00:04:46.068 START TEST locking_overlapped_coremask 00:04:46.068 ************************************ 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3253566 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3253566 /var/tmp/spdk.sock 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3253566 ']' 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.068 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.326 [2024-05-15 04:05:34.129045] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
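locking_app_on_locked_coremask, which finished just above, is the negative counterpart: with core locks left enabled on both sides, a second target aimed at an already-claimed core must refuse to start. Sketched with the same flags and sockets as this run, and assuming the error text matches what the log shows:

  # first instance claims core 0
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 2

  # second instance keeps core locks enabled, so startup is expected to fail with
  #   "Cannot create lock on core 0, probably process <pid> has claimed it."
  #   "Unable to acquire lock on assigned core mask - exiting."
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  echo "second instance exit code: $?"   # non-zero is the expected outcome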
00:04:46.327 [2024-05-15 04:05:34.129125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253566 ] 00:04:46.327 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.327 [2024-05-15 04:05:34.196127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.327 [2024-05-15 04:05:34.306952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.327 [2024-05-15 04:05:34.307009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.327 [2024-05-15 04:05:34.307012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3253571 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3253571 /var/tmp/spdk2.sock 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3253571 /var/tmp/spdk2.sock 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3253571 /var/tmp/spdk2.sock 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3253571 ']' 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.585 04:05:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.843 [2024-05-15 04:05:34.616460] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:04:46.843 [2024-05-15 04:05:34.616542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253571 ] 00:04:46.843 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.843 [2024-05-15 04:05:34.715879] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3253566 has claimed it. 00:04:46.843 [2024-05-15 04:05:34.715946] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3253571) - No such process 00:04:47.409 ERROR: process (pid: 3253571) is no longer running 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3253566 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3253566 ']' 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3253566 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3253566 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3253566' 00:04:47.409 killing process with pid 3253566 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3253566 00:04:47.409 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3253566 00:04:47.976 00:04:47.976 real 0m1.716s 00:04:47.976 user 0m4.507s 00:04:47.976 sys 0m0.466s 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.976 ************************************ 00:04:47.976 END TEST locking_overlapped_coremask 00:04:47.976 ************************************ 00:04:47.976 04:05:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:47.976 04:05:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.976 04:05:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.976 04:05:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.976 ************************************ 00:04:47.976 START TEST locking_overlapped_coremask_via_rpc 00:04:47.976 ************************************ 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3253735 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3253735 /var/tmp/spdk.sock 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3253735 ']' 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.976 04:05:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.976 [2024-05-15 04:05:35.896683] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:47.976 [2024-05-15 04:05:35.896773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253735 ] 00:04:47.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.976 [2024-05-15 04:05:35.966409] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
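The overlapped-coremask case above needs neither a shared socket nor identical masks; one common core is enough. With the masks from this run, 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the collision point. A sketch of the same collision, assuming those masks and the relative binary path:

  # cores 0-2
  ./build/bin/spdk_tgt -m 0x7 &
  sleep 2

  # cores 2-4: only core 2 overlaps, and that alone is expected to fail with
  #   "Cannot create lock on core 2, probably process <pid> has claimed it."
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  echo "overlapping-mask instance exit code: $?"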
00:04:47.976 [2024-05-15 04:05:35.966442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.242 [2024-05-15 04:05:36.080056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.242 [2024-05-15 04:05:36.083949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.242 [2024-05-15 04:05:36.083960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3253871 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3253871 /var/tmp/spdk2.sock 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3253871 ']' 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:48.503 04:05:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.503 [2024-05-15 04:05:36.382611] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:48.503 [2024-05-15 04:05:36.382697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253871 ] 00:04:48.503 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.503 [2024-05-15 04:05:36.484091] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.504 [2024-05-15 04:05:36.484123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.762 [2024-05-15 04:05:36.709165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.762 [2024-05-15 04:05:36.712987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.762 [2024-05-15 04:05:36.712990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.329 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.329 [2024-05-15 04:05:37.340035] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3253735 has claimed it. 
00:04:49.587 request: 00:04:49.587 { 00:04:49.587 "method": "framework_enable_cpumask_locks", 00:04:49.587 "req_id": 1 00:04:49.587 } 00:04:49.587 Got JSON-RPC error response 00:04:49.587 response: 00:04:49.587 { 00:04:49.587 "code": -32603, 00:04:49.587 "message": "Failed to claim CPU core: 2" 00:04:49.587 } 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3253735 /var/tmp/spdk.sock 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3253735 ']' 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3253871 /var/tmp/spdk2.sock 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3253871 ']' 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.587 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.588 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
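The -32603 response above is the expected outcome of this sub-test: each claimed core is backed by a lock file under /var/tmp (spdk_cpu_lock_NNN, listed a little further down), and the first target already holds the file for core 2, so framework_enable_cpumask_locks on the second target cannot claim it. A minimal way to inspect and retry this by hand, assuming the standard scripts/rpc.py client exposes the method under the same name and using the socket paths from this run:

    # List the per-core lock files held by the first target (cores 0, 1, 2 here).
    ls -l /var/tmp/spdk_cpu_lock_*
    # Ask the second target to claim its cores; core 2 is already taken, so this fails.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "core claim failed as expected"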
00:04:49.588 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.588 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.845 00:04:49.845 real 0m1.982s 00:04:49.845 user 0m1.039s 00:04:49.845 sys 0m0.159s 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.845 04:05:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.845 ************************************ 00:04:49.845 END TEST locking_overlapped_coremask_via_rpc 00:04:49.845 ************************************ 00:04:49.845 04:05:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:49.846 04:05:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3253735 ]] 00:04:49.846 04:05:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3253735 00:04:49.846 04:05:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3253735 ']' 00:04:49.846 04:05:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3253735 00:04:49.846 04:05:37 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:49.846 04:05:37 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:49.846 04:05:37 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3253735 00:04:50.104 04:05:37 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.104 04:05:37 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.104 04:05:37 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3253735' 00:04:50.104 killing process with pid 3253735 00:04:50.104 04:05:37 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3253735 00:04:50.104 04:05:37 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3253735 00:04:50.362 04:05:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3253871 ]] 00:04:50.362 04:05:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3253871 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3253871 ']' 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3253871 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3253871 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3253871' 00:04:50.362 killing process with pid 3253871 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3253871 00:04:50.362 04:05:38 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3253871 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3253735 ]] 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3253735 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3253735 ']' 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3253735 00:04:50.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3253735) - No such process 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3253735 is not found' 00:04:50.930 Process with pid 3253735 is not found 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3253871 ]] 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3253871 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3253871 ']' 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3253871 00:04:50.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3253871) - No such process 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3253871 is not found' 00:04:50.930 Process with pid 3253871 is not found 00:04:50.930 04:05:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:50.930 00:04:50.930 real 0m17.827s 00:04:50.930 user 0m29.927s 00:04:50.930 sys 0m5.532s 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.930 04:05:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.930 ************************************ 00:04:50.930 END TEST cpu_locks 00:04:50.930 ************************************ 00:04:50.930 00:04:50.930 real 0m44.251s 00:04:50.930 user 1m22.441s 00:04:50.930 sys 0m9.722s 00:04:50.930 04:05:38 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.930 04:05:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.930 ************************************ 00:04:50.930 END TEST event 00:04:50.930 ************************************ 00:04:50.930 04:05:38 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:50.930 04:05:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.930 04:05:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.930 04:05:38 -- common/autotest_common.sh@10 -- # set +x 00:04:50.930 ************************************ 00:04:50.930 START TEST thread 00:04:50.930 ************************************ 00:04:50.930 04:05:38 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:50.930 * Looking for test storage... 00:04:51.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:51.188 04:05:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:51.189 04:05:38 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:51.189 04:05:38 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.189 04:05:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.189 ************************************ 00:04:51.189 START TEST thread_poller_perf 00:04:51.189 ************************************ 00:04:51.189 04:05:38 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:51.189 [2024-05-15 04:05:38.991021] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:51.189 [2024-05-15 04:05:38.991081] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254234 ] 00:04:51.189 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.189 [2024-05-15 04:05:39.068641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.189 [2024-05-15 04:05:39.185000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.189 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:52.562 ====================================== 00:04:52.562 busy:2708444495 (cyc) 00:04:52.562 total_run_count: 296000 00:04:52.562 tsc_hz: 2700000000 (cyc) 00:04:52.562 ====================================== 00:04:52.562 poller_cost: 9150 (cyc), 3388 (nsec) 00:04:52.562 00:04:52.562 real 0m1.338s 00:04:52.562 user 0m1.242s 00:04:52.562 sys 0m0.090s 00:04:52.562 04:05:40 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.562 04:05:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.562 ************************************ 00:04:52.562 END TEST thread_poller_perf 00:04:52.562 ************************************ 00:04:52.562 04:05:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:52.562 04:05:40 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:52.562 04:05:40 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.562 04:05:40 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.562 ************************************ 00:04:52.562 START TEST thread_poller_perf 00:04:52.562 ************************************ 00:04:52.562 04:05:40 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:52.562 [2024-05-15 04:05:40.388212] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
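The poller_perf summary above is plain arithmetic over the reported counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure is that value scaled by the reported TSC rate. Re-deriving the first run's numbers (a check of the log, not part of the test):

    busy=2708444495; runs=296000; tsc_hz=2700000000
    echo "cyc per poll: $(( busy / runs ))"                        # 9150
    echo "ns per poll:  $(( busy / runs * 1000000000 / tsc_hz ))"  # 3388

The zero-period run that follows reports 701 cycles / 259 ns by the same relation, so a timed 1 us poller costs roughly 13x more per invocation in this run than one polled on every reactor iteration.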
00:04:52.562 [2024-05-15 04:05:40.388275] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254392 ] 00:04:52.562 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.562 [2024-05-15 04:05:40.463036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.820 [2024-05-15 04:05:40.584946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.820 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:53.756 ====================================== 00:04:53.756 busy:2703015110 (cyc) 00:04:53.756 total_run_count: 3851000 00:04:53.756 tsc_hz: 2700000000 (cyc) 00:04:53.756 ====================================== 00:04:53.756 poller_cost: 701 (cyc), 259 (nsec) 00:04:53.756 00:04:53.756 real 0m1.332s 00:04:53.756 user 0m1.226s 00:04:53.756 sys 0m0.100s 00:04:53.756 04:05:41 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.756 04:05:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.756 ************************************ 00:04:53.756 END TEST thread_poller_perf 00:04:53.756 ************************************ 00:04:53.756 04:05:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:53.756 00:04:53.756 real 0m2.833s 00:04:53.756 user 0m2.527s 00:04:53.756 sys 0m0.301s 00:04:53.756 04:05:41 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.756 04:05:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.756 ************************************ 00:04:53.756 END TEST thread 00:04:53.756 ************************************ 00:04:53.756 04:05:41 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:53.756 04:05:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.756 04:05:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.756 04:05:41 -- common/autotest_common.sh@10 -- # set +x 00:04:54.017 ************************************ 00:04:54.017 START TEST accel 00:04:54.017 ************************************ 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:54.017 * Looking for test storage... 
00:04:54.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:54.017 04:05:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:54.017 04:05:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:54.017 04:05:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.017 04:05:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3254709 00:04:54.017 04:05:41 accel -- accel/accel.sh@63 -- # waitforlisten 3254709 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@827 -- # '[' -z 3254709 ']' 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.017 04:05:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.017 04:05:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.017 04:05:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.017 04:05:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.017 04:05:41 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.017 04:05:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.017 04:05:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.017 04:05:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.017 04:05:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:54.017 04:05:41 accel -- accel/accel.sh@41 -- # jq -r . 00:04:54.017 [2024-05-15 04:05:41.876609] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:54.017 [2024-05-15 04:05:41.876704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254709 ] 00:04:54.017 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.017 [2024-05-15 04:05:41.943616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.276 [2024-05-15 04:05:42.050396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@860 -- # return 0 00:04:54.844 04:05:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:54.844 04:05:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:54.844 04:05:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:54.844 04:05:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:54.844 04:05:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:54.844 04:05:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.844 04:05:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 
04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # IFS== 00:04:54.844 04:05:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:54.844 04:05:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:54.844 04:05:42 accel -- accel/accel.sh@75 -- # killprocess 3254709 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@946 -- # '[' -z 3254709 ']' 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@950 -- # kill -0 3254709 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@951 -- # uname 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.844 04:05:42 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3254709 00:04:55.103 04:05:42 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:55.103 04:05:42 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:55.103 04:05:42 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3254709' 00:04:55.103 killing process with pid 3254709 00:04:55.103 04:05:42 accel -- common/autotest_common.sh@965 -- # kill 3254709 00:04:55.103 04:05:42 accel -- common/autotest_common.sh@970 -- # wait 3254709 00:04:55.362 04:05:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:55.362 04:05:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:55.362 04:05:43 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:04:55.362 04:05:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.362 04:05:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.362 04:05:43 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:55.362 04:05:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:04:55.362 04:05:43 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.362 04:05:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:55.621 04:05:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:55.621 04:05:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:04:55.621 04:05:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.621 04:05:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.621 ************************************ 00:04:55.621 START TEST accel_missing_filename 00:04:55.621 ************************************ 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.621 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:55.621 04:05:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:55.621 [2024-05-15 04:05:43.446374] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:55.621 [2024-05-15 04:05:43.446439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254886 ] 00:04:55.621 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.621 [2024-05-15 04:05:43.518903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.879 [2024-05-15 04:05:43.639276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.879 [2024-05-15 04:05:43.700966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.879 [2024-05-15 04:05:43.783350] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:04:56.176 A filename is required. 
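The long IFS loop a little earlier is how the harness records which module services each accel opcode: it reads accel_get_opc_assignments and, since no hardware acceleration module is configured for this target, every opcode it iterates over comes back mapped to the software module. The same information can be pulled directly, assuming the standard scripts/rpc.py client against the default RPC socket:

    # Print "opcode=module" pairs, mirroring the jq expression used by accel.sh.
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r 'to_entries[] | "\(.key)=\(.value)"'
    # Expected here: every opcode reports the software engine.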
00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.176 00:04:56.176 real 0m0.477s 00:04:56.176 user 0m0.362s 00:04:56.176 sys 0m0.150s 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.176 04:05:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:56.176 ************************************ 00:04:56.176 END TEST accel_missing_filename 00:04:56.176 ************************************ 00:04:56.176 04:05:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.176 04:05:43 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:04:56.177 04:05:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.177 04:05:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.177 ************************************ 00:04:56.177 START TEST accel_compress_verify 00:04:56.177 ************************************ 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.177 04:05:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.177 
04:05:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:56.177 04:05:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:56.177 [2024-05-15 04:05:43.976030] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:56.177 [2024-05-15 04:05:43.976095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255024 ] 00:04:56.177 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.177 [2024-05-15 04:05:44.049754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.177 [2024-05-15 04:05:44.170544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.436 [2024-05-15 04:05:44.226373] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.436 [2024-05-15 04:05:44.311391] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:04:56.436 00:04:56.436 Compression does not support the verify option, aborting. 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.437 00:04:56.437 real 0m0.479s 00:04:56.437 user 0m0.368s 00:04:56.437 sys 0m0.144s 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.437 04:05:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:56.437 ************************************ 00:04:56.437 END TEST accel_compress_verify 00:04:56.437 ************************************ 00:04:56.696 04:05:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:56.696 04:05:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:04:56.696 04:05:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.696 04:05:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.696 ************************************ 00:04:56.696 START TEST accel_wrong_workload 00:04:56.696 ************************************ 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.696 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:56.696 04:05:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:56.696 Unsupported workload type: foobar 00:04:56.696 [2024-05-15 04:05:44.509997] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:56.696 accel_perf options: 00:04:56.696 [-h help message] 00:04:56.696 [-q queue depth per core] 00:04:56.696 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:56.696 [-T number of threads per core 00:04:56.696 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:56.697 [-t time in seconds] 00:04:56.697 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:56.697 [ dif_verify, , dif_generate, dif_generate_copy 00:04:56.697 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:56.697 [-l for compress/decompress workloads, name of uncompressed input file 00:04:56.697 [-S for crc32c workload, use this seed value (default 0) 00:04:56.697 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:56.697 [-f for fill workload, use this BYTE value (default 255) 00:04:56.697 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:56.697 [-y verify result if this switch is on] 00:04:56.697 [-a tasks to allocate per core (default: same value as -q)] 00:04:56.697 Can be used to spread operations across a wider range of memory. 
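The usage text above is printed because "foobar" is not a valid -w workload; the two compress failures just before it show the other common pitfalls, a missing -l input file and the unsupported -y verify option for compress. For reference, invocations consistent with this help output and with the tests that follow (illustrative sketches, not executed in this run):

    # crc32c with the same seed the accel_crc32c test that follows uses
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # compress needs an uncompressed input file via -l (these tests use test/accel/bib)
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    # xor requires at least two source buffers, so -x must be >= 2
    ./build/examples/accel_perf -t 1 -w xor -y -x 2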
00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.697 00:04:56.697 real 0m0.025s 00:04:56.697 user 0m0.014s 00:04:56.697 sys 0m0.011s 00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.697 04:05:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:56.697 ************************************ 00:04:56.697 END TEST accel_wrong_workload 00:04:56.697 ************************************ 00:04:56.697 Error: writing output failed: Broken pipe 00:04:56.697 04:05:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.697 ************************************ 00:04:56.697 START TEST accel_negative_buffers 00:04:56.697 ************************************ 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:56.697 04:05:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:56.697 -x option must be non-negative. 
00:04:56.697 [2024-05-15 04:05:44.578417] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:56.697 accel_perf options: 00:04:56.697 [-h help message] 00:04:56.697 [-q queue depth per core] 00:04:56.697 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:56.697 [-T number of threads per core 00:04:56.697 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:56.697 [-t time in seconds] 00:04:56.697 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:56.697 [ dif_verify, , dif_generate, dif_generate_copy 00:04:56.697 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:56.697 [-l for compress/decompress workloads, name of uncompressed input file 00:04:56.697 [-S for crc32c workload, use this seed value (default 0) 00:04:56.697 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:56.697 [-f for fill workload, use this BYTE value (default 255) 00:04:56.697 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:56.697 [-y verify result if this switch is on] 00:04:56.697 [-a tasks to allocate per core (default: same value as -q)] 00:04:56.697 Can be used to spread operations across a wider range of memory. 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.697 00:04:56.697 real 0m0.021s 00:04:56.697 user 0m0.012s 00:04:56.697 sys 0m0.009s 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.697 04:05:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:56.697 ************************************ 00:04:56.697 END TEST accel_negative_buffers 00:04:56.697 ************************************ 00:04:56.697 Error: writing output failed: Broken pipe 00:04:56.697 04:05:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.697 04:05:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.697 ************************************ 00:04:56.697 START TEST accel_crc32c 00:04:56.697 ************************************ 00:04:56.697 04:05:44 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.697 04:05:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.698 04:05:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:56.698 04:05:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:56.698 [2024-05-15 04:05:44.651709] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:56.698 [2024-05-15 04:05:44.651773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255102 ] 00:04:56.698 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.957 [2024-05-15 04:05:44.727613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.957 [2024-05-15 04:05:44.846858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:56.957 04:05:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:58.334 04:05:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.334 00:04:58.334 real 0m1.490s 00:04:58.334 user 0m1.341s 00:04:58.334 sys 0m0.152s 00:04:58.334 04:05:46 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.334 04:05:46 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:58.334 ************************************ 00:04:58.334 END TEST accel_crc32c 00:04:58.334 ************************************ 00:04:58.334 04:05:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:58.334 04:05:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:04:58.334 04:05:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.334 04:05:46 accel -- common/autotest_common.sh@10 -- # set +x 00:04:58.334 ************************************ 00:04:58.334 START TEST accel_crc32c_C2 00:04:58.334 ************************************ 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:58.334 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:58.334 [2024-05-15 04:05:46.192174] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:58.334 [2024-05-15 04:05:46.192235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255324 ] 00:04:58.334 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.334 [2024-05-15 04:05:46.267617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.594 [2024-05-15 04:05:46.387138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.594 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.595 04:05:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.971 00:04:59.971 real 0m1.473s 00:04:59.971 user 0m1.324s 00:04:59.971 sys 0m0.151s 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.971 04:05:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:59.971 ************************************ 00:04:59.971 END TEST accel_crc32c_C2 00:04:59.971 ************************************ 00:04:59.971 04:05:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:59.971 04:05:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:04:59.971 04:05:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.971 04:05:47 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.971 ************************************ 00:04:59.971 START TEST accel_copy 00:04:59.971 ************************************ 00:04:59.971 04:05:47 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:59.971 [2024-05-15 04:05:47.716748] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:04:59.971 [2024-05-15 04:05:47.716811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255537 ] 00:04:59.971 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.971 [2024-05-15 04:05:47.789323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.971 [2024-05-15 04:05:47.908580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.971 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:59.972 04:05:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.345 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.345 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
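(Note: the long val= / case "$var" / IFS=: / read runs here are the bash xtrace of accel.sh's option loop for the copy pass; the trace resumes immediately below.) As a minimal sketch only, the copy invocation recorded above could be reproduced by hand roughly as follows; running accel_perf outside the harness, and dropping the -c /dev/fd/62 config descriptor that the wrapper supplies, are assumptions:

    # sketch only, not part of the original run
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w copy -y    # same workload flags as recorded in the trace above
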
00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:01.346 04:05:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.346 00:05:01.346 real 0m1.493s 00:05:01.346 user 0m1.343s 00:05:01.346 sys 0m0.152s 00:05:01.346 04:05:49 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.346 04:05:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:01.346 ************************************ 00:05:01.346 END TEST accel_copy 00:05:01.346 ************************************ 00:05:01.346 04:05:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:01.346 04:05:49 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:01.346 04:05:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.346 04:05:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.346 ************************************ 00:05:01.346 START TEST accel_fill 00:05:01.346 ************************************ 00:05:01.346 04:05:49 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.346 04:05:49 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:01.346 04:05:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:01.346 [2024-05-15 04:05:49.263712] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:01.346 [2024-05-15 04:05:49.263778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255694 ] 00:05:01.346 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.346 [2024-05-15 04:05:49.338072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.603 [2024-05-15 04:05:49.462566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:01.603 04:05:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:02.976 04:05:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.976 00:05:02.976 real 0m1.499s 00:05:02.976 user 0m1.346s 00:05:02.976 sys 0m0.154s 00:05:02.976 04:05:50 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.976 04:05:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 ************************************ 00:05:02.976 END TEST accel_fill 00:05:02.976 ************************************ 00:05:02.976 04:05:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:02.976 04:05:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:02.976 04:05:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.976 04:05:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 ************************************ 00:05:02.976 START TEST accel_copy_crc32c 00:05:02.976 ************************************ 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:02.976 04:05:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
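The build_accel_config / jq trace just above is how the harness produces the JSON that accel_perf receives as -c /dev/fd/62; that path is consistent with bash process substitution. A hypothetical minimal sketch of the same pattern follows; the real config contents are not visible in this log, so the empty JSON object below is a placeholder assumption:

    # sketch only; '{}' stands in for the generated accel config
    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    "$accel_perf" -c <(echo '{}' | jq -r .) -t 1 -w copy_crc32c -y

The accel_perf startup and EAL output for this copy_crc32c pass follow below.
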
00:05:02.976 [2024-05-15 04:05:50.817434] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:02.976 [2024-05-15 04:05:50.817497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255962 ] 00:05:02.976 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.976 [2024-05-15 04:05:50.895161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.235 [2024-05-15 04:05:51.022885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.235 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.235 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.235 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.236 04:05:51 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.236 04:05:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.611 00:05:04.611 real 0m1.500s 00:05:04.611 user 0m1.339s 00:05:04.611 sys 0m0.164s 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.611 04:05:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:04.611 ************************************ 00:05:04.611 END TEST accel_copy_crc32c 00:05:04.611 ************************************ 00:05:04.611 04:05:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:04.611 04:05:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:04.611 04:05:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.611 04:05:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.611 ************************************ 00:05:04.611 START TEST accel_copy_crc32c_C2 00:05:04.611 ************************************ 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.611 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:04.611 [2024-05-15 04:05:52.370438] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:04.611 [2024-05-15 04:05:52.370502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256131 ] 00:05:04.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.611 [2024-05-15 04:05:52.443479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.611 [2024-05-15 04:05:52.566722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:04.870 04:05:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.246 00:05:06.246 real 0m1.500s 00:05:06.246 user 0m1.337s 00:05:06.246 sys 0m0.166s 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.246 04:05:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:06.246 
************************************ 00:05:06.246 END TEST accel_copy_crc32c_C2 00:05:06.246 ************************************ 00:05:06.246 04:05:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:06.246 04:05:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:06.246 04:05:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.246 04:05:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.246 ************************************ 00:05:06.246 START TEST accel_dualcast 00:05:06.246 ************************************ 00:05:06.246 04:05:53 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:06.246 04:05:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:06.246 [2024-05-15 04:05:53.924266] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
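Each accel_perf start in this section is preceded by the recurring notice "EAL: No free 2048 kB hugepages reported on node 1", and the EAL parameters line that follows below lists the flags the harness passes (--huge-unlink, --no-telemetry, --file-prefix=spdk_pid...). As a quick sketch, the hugepage state those notices refer to can be inspected through the standard Linux interfaces (the node1 path exists only on multi-node NUMA machines):

    # inspect 2048 kB hugepage availability referenced by the EAL notices
    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages

The dualcast pass started above continues below.
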
00:05:06.246 [2024-05-15 04:05:53.924330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256285 ] 00:05:06.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.246 [2024-05-15 04:05:53.997162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.246 [2024-05-15 04:05:54.120006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:06.246 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 
04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.247 04:05:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.620 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:07.621 04:05:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.621 00:05:07.621 real 0m1.491s 00:05:07.621 user 0m1.345s 00:05:07.621 sys 0m0.147s 00:05:07.621 04:05:55 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.621 04:05:55 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:07.621 ************************************ 00:05:07.621 END TEST accel_dualcast 00:05:07.621 ************************************ 00:05:07.621 04:05:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:07.621 04:05:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:07.621 04:05:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.621 04:05:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.621 ************************************ 00:05:07.621 START TEST accel_compare 00:05:07.621 ************************************ 00:05:07.621 04:05:55 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:07.621 04:05:55 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:07.621 [2024-05-15 04:05:55.465879] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
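For readers following the trace, the compare pass above boils down to a single accel_perf invocation. A minimal standalone sketch is shown below; it assumes the SPDK tree from this workspace is already built, and it drops the harness-specific -c /dev/fd/62 JSON config (with no config supplied the run presumably falls back to the software module, which is what this pass reports using anyway):
# Hypothetical manual reproduction of the traced compare run:
#   -t 1        run the workload for 1 second (the '1 seconds' value read in the trace)
#   -w compare  select the compare workload
#   -y          verify the results (the 'Yes' value read in the trace)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y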
00:05:07.621 [2024-05-15 04:05:55.465952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256559 ] 00:05:07.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.621 [2024-05-15 04:05:55.539668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.879 [2024-05-15 04:05:55.664365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:07.879 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:07.880 04:05:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:09.254 04:05:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.254 00:05:09.254 real 0m1.494s 00:05:09.254 user 0m1.346s 00:05:09.254 sys 0m0.149s 00:05:09.254 04:05:56 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.254 04:05:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:09.254 ************************************ 00:05:09.254 END TEST accel_compare 00:05:09.254 ************************************ 00:05:09.254 04:05:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:09.254 04:05:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:09.254 04:05:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.254 04:05:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.254 ************************************ 00:05:09.254 START TEST accel_xor 00:05:09.254 ************************************ 00:05:09.255 04:05:56 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:09.255 04:05:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:09.255 [2024-05-15 04:05:57.014502] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
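The xor pass that starts here follows the same pattern; the '2' value read back in the trace below appears to be the default number of xor source buffers, so an equivalent hand-run would look roughly like this (same assumptions as the compare sketch above):
# Hypothetical standalone xor run with the default two source buffers:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y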
00:05:09.255 [2024-05-15 04:05:57.014566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256720 ] 00:05:09.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.255 [2024-05-15 04:05:57.087283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.255 [2024-05-15 04:05:57.209804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.513 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.514 04:05:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 
04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.888 00:05:10.888 real 0m1.499s 00:05:10.888 user 0m1.346s 00:05:10.888 sys 0m0.154s 00:05:10.888 04:05:58 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.888 04:05:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:10.888 ************************************ 00:05:10.888 END TEST accel_xor 00:05:10.888 ************************************ 00:05:10.888 04:05:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:10.888 04:05:58 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:10.888 04:05:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.888 04:05:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.888 ************************************ 00:05:10.888 START TEST accel_xor 00:05:10.888 ************************************ 00:05:10.888 04:05:58 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:10.888 [2024-05-15 04:05:58.567560] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
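This second xor pass is the same workload re-run with -x 3; the '3' value in the trace below is presumably that source-buffer count being read back. A rough equivalent, under the same assumptions as the earlier sketches:
# Hypothetical xor run with three source buffers instead of the default two:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3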
00:05:10.888 [2024-05-15 04:05:58.567625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256877 ] 00:05:10.888 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.888 [2024-05-15 04:05:58.640283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.888 [2024-05-15 04:05:58.761577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:10.888 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.889 04:05:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 
04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:12.295 04:06:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.295 00:05:12.295 real 0m1.488s 00:05:12.295 user 0m1.342s 00:05:12.295 sys 0m0.148s 00:05:12.295 04:06:00 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.295 04:06:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:12.295 ************************************ 00:05:12.295 END TEST accel_xor 00:05:12.295 ************************************ 00:05:12.295 04:06:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:12.295 04:06:00 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:12.295 04:06:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.295 04:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.295 ************************************ 00:05:12.295 START TEST accel_dif_verify 00:05:12.295 ************************************ 00:05:12.295 04:06:00 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:12.295 04:06:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:12.296 [2024-05-15 04:06:00.105941] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
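The dif_verify pass drops the -y flag and reads back a few extra sizes ('4096 bytes', '512 bytes', '8 bytes' in the trace below); these are presumably the buffer, DIF block, and metadata sizes the test exercises. A rough standalone equivalent, with the same caveats as the earlier sketches:
# Hypothetical dif_verify run mirroring the traced flags:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify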
00:05:12.296 [2024-05-15 04:06:00.106017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257164 ] 00:05:12.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.296 [2024-05-15 04:06:00.181793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.296 [2024-05-15 04:06:00.305798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 
04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.554 04:06:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 
04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:13.926 04:06:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.926 00:05:13.926 real 0m1.485s 00:05:13.926 user 0m1.337s 00:05:13.926 sys 0m0.148s 00:05:13.926 04:06:01 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.926 04:06:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:13.926 ************************************ 00:05:13.926 END TEST accel_dif_verify 00:05:13.926 ************************************ 00:05:13.926 04:06:01 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:13.926 04:06:01 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:13.926 04:06:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.926 04:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.926 ************************************ 00:05:13.926 START TEST accel_dif_generate 00:05:13.926 ************************************ 00:05:13.926 04:06:01 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:13.926 [2024-05-15 04:06:01.640726] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:13.926 [2024-05-15 04:06:01.640790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257414 ] 00:05:13.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.926 [2024-05-15 04:06:01.714392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.926 [2024-05-15 04:06:01.836257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:13.926 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.927 04:06:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:15.301 04:06:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.301 00:05:15.301 real 0m1.497s 00:05:15.301 user 0m1.348s 00:05:15.301 sys 
0m0.152s 00:05:15.301 04:06:03 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.301 04:06:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:15.301 ************************************ 00:05:15.301 END TEST accel_dif_generate 00:05:15.301 ************************************ 00:05:15.301 04:06:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:15.301 04:06:03 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:15.301 04:06:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.301 04:06:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.301 ************************************ 00:05:15.301 START TEST accel_dif_generate_copy 00:05:15.301 ************************************ 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:15.301 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:15.301 [2024-05-15 04:06:03.185538] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
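The dif_generate pass that just completed and the dif_generate_copy pass starting here round out the DIF coverage; both follow the same invocation shape, so hand-run sketches (same assumptions as above) would be:
# Hypothetical runs of the two DIF-generation workloads:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy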
00:05:15.301 [2024-05-15 04:06:03.185605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257581 ] 00:05:15.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.301 [2024-05-15 04:06:03.260352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.564 [2024-05-15 04:06:03.384788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.564 04:06:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.936 00:05:16.936 real 0m1.501s 00:05:16.936 user 0m1.346s 00:05:16.936 sys 0m0.158s 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.936 04:06:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:16.936 ************************************ 00:05:16.936 END TEST accel_dif_generate_copy 00:05:16.936 ************************************ 00:05:16.936 04:06:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:16.936 04:06:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.936 04:06:04 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:16.936 04:06:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.936 04:06:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.936 ************************************ 00:05:16.936 START TEST accel_comp 00:05:16.936 ************************************ 00:05:16.936 04:06:04 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.936 04:06:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:16.936 04:06:04 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:16.936 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:16.937 04:06:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:16.937 [2024-05-15 04:06:04.737115] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:16.937 [2024-05-15 04:06:04.737180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257855 ] 00:05:16.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.937 [2024-05-15 04:06:04.811319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.937 [2024-05-15 04:06:04.932882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 
04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:17.194 04:06:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:18.567 04:06:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.567 00:05:18.567 real 0m1.490s 00:05:18.567 user 0m1.342s 00:05:18.567 sys 0m0.151s 00:05:18.567 04:06:06 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.567 04:06:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:18.567 ************************************ 00:05:18.567 END TEST accel_comp 00:05:18.567 ************************************ 00:05:18.567 04:06:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.567 04:06:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:18.567 04:06:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.567 04:06:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.567 ************************************ 00:05:18.567 START TEST accel_decomp 00:05:18.567 ************************************ 00:05:18.567 04:06:06 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.567 04:06:06 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:18.567 04:06:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:18.567 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.567 04:06:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.567 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:18.568 [2024-05-15 04:06:06.274791] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:18.568 [2024-05-15 04:06:06.274853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258016 ] 00:05:18.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.568 [2024-05-15 04:06:06.348972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.568 [2024-05-15 04:06:06.471686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.568 04:06:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.943 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:19.944 04:06:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.944 00:05:19.944 real 0m1.493s 00:05:19.944 user 0m1.343s 00:05:19.944 sys 0m0.153s 00:05:19.944 04:06:07 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.944 04:06:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 ************************************ 00:05:19.944 END TEST accel_decomp 00:05:19.944 ************************************ 00:05:19.944 
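Note on the accel_decomp run above: the command that produced this trace is printed verbatim by the harness at accel/accel.sh@12. Pulled out of the wrapped log and re-broken onto continuation lines for readability only, a sketch of that invocation is below; the flag notes are a best-effort reading (the -y meaning is an assumption from accel_perf usage text, not from this log), and /dev/fd/62 only exists inside the harness, which feeds the JSON accel config built by build_accel_config on that descriptor, so running it standalone would need a real config file path instead.

    # Flags as printed in the trace:
    #   -c /dev/fd/62   JSON accel config fed in by accel.sh's build_accel_config (harness-only fd)
    #   -t 1            run the workload for 1 second
    #   -w decompress   accel opcode under test
    #   -l <bib>        input file used by the compress/decompress tests
    #   -y              assumption: enables result verification (per accel_perf usage, not shown here)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y

The accel_decmop_full test that starts next runs the same binary with an added -o 0 argument, as shown in its own accel.sh@12 line further down.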
04:06:07 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.944 04:06:07 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:19.944 04:06:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.944 04:06:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 ************************************ 00:05:19.944 START TEST accel_decmop_full 00:05:19.944 ************************************ 00:05:19.944 04:06:07 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:19.944 04:06:07 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:19.944 [2024-05-15 04:06:07.817067] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:05:19.944 [2024-05-15 04:06:07.817133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258464 ] 00:05:19.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.944 [2024-05-15 04:06:07.894341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.203 [2024-05-15 04:06:08.015610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.203 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:20.204 04:06:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.579 04:06:09 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.579 00:05:21.579 real 0m1.501s 00:05:21.579 user 0m1.343s 00:05:21.579 sys 0m0.159s 00:05:21.579 04:06:09 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.579 04:06:09 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:21.579 ************************************ 00:05:21.579 END TEST accel_decmop_full 00:05:21.579 ************************************ 00:05:21.579 04:06:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.579 04:06:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:21.579 04:06:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.579 04:06:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.579 ************************************ 00:05:21.579 START TEST accel_decomp_mcore 00:05:21.579 ************************************ 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:21.579 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:21.579 [2024-05-15 04:06:09.367759] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:21.579 [2024-05-15 04:06:09.367823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258946 ] 00:05:21.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.580 [2024-05-15 04:06:09.441541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.580 [2024-05-15 04:06:09.566276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.580 [2024-05-15 04:06:09.566331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.580 [2024-05-15 04:06:09.566384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.580 [2024-05-15 04:06:09.566387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.838 04:06:09 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.838 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.839 04:06:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.214 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.215 00:05:23.215 real 0m1.497s 00:05:23.215 user 0m4.777s 00:05:23.215 sys 0m0.153s 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.215 04:06:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:23.215 ************************************ 00:05:23.215 END TEST accel_decomp_mcore 00:05:23.215 ************************************ 00:05:23.215 04:06:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:23.215 04:06:10 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:23.215 04:06:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.215 04:06:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.215 ************************************ 00:05:23.215 START TEST accel_decomp_full_mcore 00:05:23.215 ************************************ 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:23.215 04:06:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:23.215 [2024-05-15 04:06:10.919222] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:23.215 [2024-05-15 04:06:10.919286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259116 ] 00:05:23.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.215 [2024-05-15 04:06:10.994776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.215 [2024-05-15 04:06:11.121059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.215 [2024-05-15 04:06:11.121111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.215 [2024-05-15 04:06:11.121166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.215 [2024-05-15 04:06:11.121169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:23.215 04:06:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.215 04:06:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.591 00:05:24.591 real 0m1.519s 00:05:24.591 user 0m4.858s 00:05:24.591 sys 0m0.158s 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.591 04:06:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 ************************************ 00:05:24.591 END TEST accel_decomp_full_mcore 00:05:24.591 ************************************ 00:05:24.591 04:06:12 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.591 04:06:12 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:24.591 04:06:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.591 04:06:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.591 ************************************ 00:05:24.591 START TEST accel_decomp_mthread 00:05:24.591 ************************************ 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:24.591 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:05:24.591 [2024-05-15 04:06:12.494448] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:24.591 [2024-05-15 04:06:12.494513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259394 ] 00:05:24.591 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.591 [2024-05-15 04:06:12.569105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.850 [2024-05-15 04:06:12.691338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.850 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 04:06:12 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.225 00:05:26.225 real 0m1.510s 00:05:26.225 user 0m1.362s 00:05:26.225 sys 0m0.151s 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.225 04:06:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:26.225 ************************************ 00:05:26.225 END TEST accel_decomp_mthread 00:05:26.225 ************************************ 00:05:26.225 04:06:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:26.225 04:06:14 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:26.225 04:06:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.225 04:06:14 
accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.225 ************************************ 00:05:26.225 START TEST accel_decomp_full_mthread 00:05:26.225 ************************************ 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:26.225 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:26.225 [2024-05-15 04:06:14.058211] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:05:26.225 [2024-05-15 04:06:14.058275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259545 ] 00:05:26.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.225 [2024-05-15 04:06:14.131512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.483 [2024-05-15 04:06:14.254212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.483 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:26.484 04:06:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.905 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.906 00:05:27.906 real 0m1.519s 00:05:27.906 user 0m1.367s 00:05:27.906 sys 0m0.155s 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.906 04:06:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:27.906 ************************************ 00:05:27.906 END TEST accel_decomp_full_mthread 00:05:27.906 
************************************ 00:05:27.906 04:06:15 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:27.906 04:06:15 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:27.906 04:06:15 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:27.906 04:06:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.906 04:06:15 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:27.906 04:06:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.906 04:06:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.906 04:06:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.906 04:06:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.906 04:06:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.906 04:06:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.906 04:06:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:27.906 04:06:15 accel -- accel/accel.sh@41 -- # jq -r . 00:05:27.906 ************************************ 00:05:27.906 START TEST accel_dif_functional_tests 00:05:27.906 ************************************ 00:05:27.906 04:06:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:27.906 [2024-05-15 04:06:15.650112] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:27.906 [2024-05-15 04:06:15.650187] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259717 ] 00:05:27.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.906 [2024-05-15 04:06:15.729696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.906 [2024-05-15 04:06:15.853337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.906 [2024-05-15 04:06:15.853389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.906 [2024-05-15 04:06:15.853393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.165 00:05:28.165 00:05:28.165 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.165 http://cunit.sourceforge.net/ 00:05:28.165 00:05:28.165 00:05:28.165 Suite: accel_dif 00:05:28.165 Test: verify: DIF generated, GUARD check ...passed 00:05:28.165 Test: verify: DIF generated, APPTAG check ...passed 00:05:28.165 Test: verify: DIF generated, REFTAG check ...passed 00:05:28.165 Test: verify: DIF not generated, GUARD check ...[2024-05-15 04:06:15.955467] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:28.165 [2024-05-15 04:06:15.955540] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:28.165 passed 00:05:28.165 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 04:06:15.955587] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:28.165 [2024-05-15 04:06:15.955628] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:28.165 passed 00:05:28.165 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 04:06:15.955667] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:28.165 [2024-05-15 
04:06:15.955709] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:28.165 passed 00:05:28.165 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:28.165 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 04:06:15.955783] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:28.165 passed 00:05:28.165 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:28.165 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:28.165 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:28.165 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 04:06:15.955974] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:28.165 passed 00:05:28.165 Test: generate copy: DIF generated, GUARD check ...passed 00:05:28.165 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:28.165 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:28.165 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:28.165 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:28.165 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:28.165 Test: generate copy: iovecs-len validate ...[2024-05-15 04:06:15.956272] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:28.165 passed 00:05:28.165 Test: generate copy: buffer alignment validate ...passed 00:05:28.165 00:05:28.166 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.166 suites 1 1 n/a 0 0 00:05:28.166 tests 20 20 20 0 0 00:05:28.166 asserts 204 204 204 0 n/a 00:05:28.166 00:05:28.166 Elapsed time = 0.003 seconds 00:05:28.425 00:05:28.425 real 0m0.619s 00:05:28.425 user 0m0.930s 00:05:28.425 sys 0m0.186s 00:05:28.425 04:06:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.425 04:06:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 ************************************ 00:05:28.425 END TEST accel_dif_functional_tests 00:05:28.425 ************************************ 00:05:28.425 00:05:28.425 real 0m34.473s 00:05:28.425 user 0m37.774s 00:05:28.425 sys 0m4.889s 00:05:28.425 04:06:16 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.425 04:06:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 ************************************ 00:05:28.425 END TEST accel 00:05:28.425 ************************************ 00:05:28.425 04:06:16 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:28.425 04:06:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.425 04:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.425 04:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 ************************************ 00:05:28.425 START TEST accel_rpc 00:05:28.425 ************************************ 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:28.425 * Looking for test storage... 
00:05:28.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:28.425 04:06:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.425 04:06:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3259894 00:05:28.425 04:06:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:28.425 04:06:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3259894 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3259894 ']' 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.425 04:06:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 [2024-05-15 04:06:16.410057] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:28.425 [2024-05-15 04:06:16.410144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259894 ] 00:05:28.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.685 [2024-05-15 04:06:16.478718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.685 [2024-05-15 04:06:16.591082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.685 04:06:16 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.685 04:06:16 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:28.685 04:06:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:28.685 04:06:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:28.685 04:06:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:28.685 04:06:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:28.685 04:06:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:28.685 04:06:16 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.685 04:06:16 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.685 04:06:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.685 ************************************ 00:05:28.685 START TEST accel_assign_opcode 00:05:28.685 ************************************ 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.685 [2024-05-15 04:06:16.659701] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.685 [2024-05-15 04:06:16.667699] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.685 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.944 software 00:05:28.944 00:05:28.944 real 0m0.302s 00:05:28.944 user 0m0.043s 00:05:28.944 sys 0m0.008s 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.944 04:06:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.944 ************************************ 00:05:28.944 END TEST accel_assign_opcode 00:05:28.944 ************************************ 00:05:29.203 04:06:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3259894 00:05:29.203 04:06:16 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3259894 ']' 00:05:29.203 04:06:16 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3259894 00:05:29.203 04:06:16 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:05:29.203 04:06:16 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.203 04:06:16 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3259894 00:05:29.203 04:06:17 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.203 04:06:17 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.203 04:06:17 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3259894' 00:05:29.203 killing process with pid 3259894 00:05:29.203 04:06:17 accel_rpc -- common/autotest_common.sh@965 -- # kill 3259894 00:05:29.203 04:06:17 accel_rpc -- common/autotest_common.sh@970 -- # wait 3259894 00:05:29.772 00:05:29.772 real 0m1.184s 00:05:29.772 user 0m1.133s 00:05:29.772 sys 0m0.427s 00:05:29.772 04:06:17 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.772 04:06:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 ************************************ 00:05:29.772 END TEST accel_rpc 00:05:29.772 ************************************ 00:05:29.772 04:06:17 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:29.772 04:06:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.772 04:06:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.772 04:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.772 ************************************ 00:05:29.772 START TEST app_cmdline 00:05:29.772 ************************************ 00:05:29.772 04:06:17 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:29.772 * Looking for test storage... 00:05:29.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:29.772 04:06:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.772 04:06:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3260110 00:05:29.772 04:06:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.772 04:06:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3260110 00:05:29.772 04:06:17 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3260110 ']' 00:05:29.772 04:06:17 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.772 04:06:17 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.773 04:06:17 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.773 04:06:17 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.773 04:06:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.773 [2024-05-15 04:06:17.646351] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:05:29.773 [2024-05-15 04:06:17.646438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260110 ] 00:05:29.773 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.773 [2024-05-15 04:06:17.714604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.031 [2024-05-15 04:06:17.822740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.290 04:06:18 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.290 04:06:18 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:30.290 04:06:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:30.549 { 00:05:30.549 "version": "SPDK v24.05-pre git sha1 2dc74a001", 00:05:30.549 "fields": { 00:05:30.549 "major": 24, 00:05:30.549 "minor": 5, 00:05:30.549 "patch": 0, 00:05:30.549 "suffix": "-pre", 00:05:30.549 "commit": "2dc74a001" 00:05:30.549 } 00:05:30.549 } 00:05:30.549 04:06:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:30.549 04:06:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:30.549 04:06:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:30.549 04:06:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:30.549 04:06:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:30.549 04:06:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.550 04:06:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:30.550 04:06:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.550 04:06:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.550 04:06:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.550 04:06:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.550 04:06:18 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:30.550 04:06:18 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.809 request: 00:05:30.809 { 00:05:30.809 "method": "env_dpdk_get_mem_stats", 00:05:30.809 "req_id": 1 00:05:30.809 } 00:05:30.809 Got JSON-RPC error response 00:05:30.809 response: 00:05:30.809 { 00:05:30.809 "code": -32601, 00:05:30.809 "message": "Method not found" 00:05:30.809 } 00:05:30.809 04:06:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.810 04:06:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3260110 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3260110 ']' 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3260110 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3260110 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3260110' 00:05:30.810 killing process with pid 3260110 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@965 -- # kill 3260110 00:05:30.810 04:06:18 app_cmdline -- common/autotest_common.sh@970 -- # wait 3260110 00:05:31.378 00:05:31.378 real 0m1.666s 00:05:31.378 user 0m2.052s 00:05:31.378 sys 0m0.475s 00:05:31.378 04:06:19 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.378 04:06:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.378 ************************************ 00:05:31.378 END TEST app_cmdline 00:05:31.378 ************************************ 00:05:31.378 04:06:19 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:31.378 04:06:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.378 04:06:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.378 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:31.378 ************************************ 00:05:31.378 START TEST version 00:05:31.378 ************************************ 00:05:31.378 04:06:19 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:31.378 * Looking for test storage... 
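The rest of the app_cmdline run above is essentially an allowlist check: the two permitted RPCs must answer, and anything else must be rejected with JSON-RPC error -32601 ("Method not found"). A hedged sketch condensing the rpc.py calls visible in the trace (the jq/sort filtering mirrors cmdline.sh@26; the error handling is simplified relative to the NOT/valid_exec_arg helpers used by the test):

#!/usr/bin/env bash
# Sketch of the RPC allowlist check from the app_cmdline trace above.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC_PY" spdk_get_version                              # allowed, prints the version JSON
methods=$("$RPC_PY" rpc_get_methods | jq -r '.[]' | sort)
[ "$methods" = $'rpc_get_methods\nspdk_get_version' ] || exit 1

# A method outside the allowlist must fail (the target answers -32601).
if "$RPC_PY" env_dpdk_get_mem_stats; then
    echo "env_dpdk_get_mem_stats unexpectedly succeeded" >&2
    exit 1
fi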
00:05:31.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:31.378 04:06:19 version -- app/version.sh@17 -- # get_header_version major 00:05:31.378 04:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # cut -f2 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.378 04:06:19 version -- app/version.sh@17 -- # major=24 00:05:31.378 04:06:19 version -- app/version.sh@18 -- # get_header_version minor 00:05:31.378 04:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # cut -f2 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.378 04:06:19 version -- app/version.sh@18 -- # minor=5 00:05:31.378 04:06:19 version -- app/version.sh@19 -- # get_header_version patch 00:05:31.378 04:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # cut -f2 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.378 04:06:19 version -- app/version.sh@19 -- # patch=0 00:05:31.378 04:06:19 version -- app/version.sh@20 -- # get_header_version suffix 00:05:31.378 04:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # cut -f2 00:05:31.378 04:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.378 04:06:19 version -- app/version.sh@20 -- # suffix=-pre 00:05:31.378 04:06:19 version -- app/version.sh@22 -- # version=24.5 00:05:31.378 04:06:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.378 04:06:19 version -- app/version.sh@28 -- # version=24.5rc0 00:05:31.378 04:06:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:31.378 04:06:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.378 04:06:19 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:31.378 04:06:19 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:31.378 00:05:31.378 real 0m0.106s 00:05:31.378 user 0m0.053s 00:05:31.378 sys 0m0.075s 00:05:31.378 04:06:19 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.378 04:06:19 version -- common/autotest_common.sh@10 -- # set +x 00:05:31.378 ************************************ 00:05:31.378 END TEST version 00:05:31.378 ************************************ 00:05:31.378 04:06:19 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@194 -- # uname -s 00:05:31.638 04:06:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:31.638 04:06:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:31.638 04:06:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:31.638 04:06:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
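The version test traced above derives the version string from include/spdk/version.h and compares it with the installed Python package. A condensed sketch of that flow, reusing the grep/cut/tr pipeline shown in the trace (the mapping of the "-pre" suffix to an rc0 tag is inferred from the traced values 24.5 and 24.5rc0):

#!/usr/bin/env bash
# Sketch of the version.sh check: parse version.h, compare with spdk.__version__.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
hdr=$SPDK_ROOT/include/spdk/version.h

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 24 in this run
minor=$(get_header_version MINOR)     # 5
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre

version="$major.$minor"
if (( patch != 0 )); then version="$version.$patch"; fi
if [ -n "$suffix" ]; then version="${version}rc0"; fi   # "-pre" shows up as an rc0 tag

py_version=$(PYTHONPATH=$SPDK_ROOT/python python3 -c 'import spdk; print(spdk.__version__)')
[ "$py_version" = "$version" ]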
00:05:31.638 04:06:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:31.638 04:06:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.638 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:31.638 04:06:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:31.638 04:06:19 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:05:31.638 04:06:19 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.638 04:06:19 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:31.638 04:06:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.638 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:31.638 ************************************ 00:05:31.638 START TEST nvmf_tcp 00:05:31.638 ************************************ 00:05:31.638 04:06:19 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.638 * Looking for test storage... 00:05:31.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.638 04:06:19 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.638 04:06:19 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.638 04:06:19 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.638 04:06:19 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.638 04:06:19 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.638 04:06:19 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.638 04:06:19 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:31.638 04:06:19 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:31.638 04:06:19 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.638 04:06:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:31.638 04:06:19 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:31.638 04:06:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:31.638 04:06:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.638 
04:06:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.638 ************************************ 00:05:31.638 START TEST nvmf_example 00:05:31.638 ************************************ 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:31.638 * Looking for test storage... 00:05:31.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.638 04:06:19 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:31.639 04:06:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:34.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:34.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:34.172 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:34.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.172 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.430 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:34.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:05:34.431 00:05:34.431 --- 10.0.0.2 ping statistics --- 00:05:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.431 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:05:34.431 00:05:34.431 --- 10.0.0.1 ping statistics --- 00:05:34.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.431 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3262424 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3262424 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3262424 ']' 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
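Before the example target is configured, nvmf_tcp_init (traced above) builds the test network: one e810 port is moved into a namespace as the target side, the other stays in the root namespace as the initiator, TCP port 4420 is opened, and both directions are pinged. A compact sketch of those steps with the interface names and addresses printed in the trace:

#!/usr/bin/env bash
# Sketch of the TCP test-network setup performed by nvmf_tcp_init.
TGT_NS=cvl_0_0_ns_spdk

ip netns add $TGT_NS
ip link set cvl_0_0 netns $TGT_NS                           # target-side port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side (root namespace)
ip netns exec $TGT_NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec $TGT_NS ip link set cvl_0_0 up
ip netns exec $TGT_NS ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec $TGT_NS ping -c 1 10.0.0.1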
00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.431 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:34.690 04:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:34.948 EAL: No free 2048 kB hugepages reported on node 1 
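The target-side setup issued through rpc_cmd in the trace above amounts to five RPCs: create the TCP transport, create a 64 MiB / 512-byte-block malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A sketch of the same sequence (rpc_cmd in the test resolves to scripts/rpc.py against the default /var/tmp/spdk.sock shown earlier in the trace):

#!/usr/bin/env bash
# Sketch of the nvmf_example target configuration, mirroring nvmf_example.sh@45-57.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC_PY" nvmf_create_transport -t tcp -o -u 8192
"$RPC_PY" bdev_malloc_create 64 512                    # returns the bdev name "Malloc0"
"$RPC_PY" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC_PY" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC_PY" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then runs spdk_nvme_perf with the exact arguments shown at the end of the trace above, which produces the latency table that follows.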
00:05:47.152 Initializing NVMe Controllers 00:05:47.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:47.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:47.152 Initialization complete. Launching workers. 00:05:47.152 ======================================================== 00:05:47.152 Latency(us) 00:05:47.152 Device Information : IOPS MiB/s Average min max 00:05:47.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13067.29 51.04 4897.29 902.27 15874.24 00:05:47.152 ======================================================== 00:05:47.152 Total : 13067.29 51.04 4897.29 902.27 15874.24 00:05:47.152 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:47.152 rmmod nvme_tcp 00:05:47.152 rmmod nvme_fabrics 00:05:47.152 rmmod nvme_keyring 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3262424 ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3262424 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3262424 ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3262424 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3262424 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3262424' 00:05:47.152 killing process with pid 3262424 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3262424 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3262424 00:05:47.152 nvmf threads initialize successfully 00:05:47.152 bdev subsystem init successfully 00:05:47.152 created a nvmf target service 00:05:47.152 create targets's poll groups done 00:05:47.152 all subsystems of target started 00:05:47.152 nvmf target is running 00:05:47.152 all subsystems of target stopped 00:05:47.152 destroy targets's poll groups done 00:05:47.152 destroyed the nvmf target service 00:05:47.152 bdev subsystem finish successfully 00:05:47.152 nvmf threads destroy successfully 00:05:47.152 04:06:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:47.152 04:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.409 04:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:47.409 04:06:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:47.409 04:06:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.409 04:06:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.669 00:05:47.669 real 0m15.889s 00:05:47.669 user 0m42.712s 00:05:47.669 sys 0m3.676s 00:05:47.669 04:06:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.669 04:06:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.669 ************************************ 00:05:47.669 END TEST nvmf_example 00:05:47.669 ************************************ 00:05:47.669 04:06:35 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:47.669 04:06:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:47.669 04:06:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.669 04:06:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.669 ************************************ 00:05:47.669 START TEST nvmf_filesystem 00:05:47.669 ************************************ 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:47.669 * Looking for test storage... 
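The nvmf_example teardown that just ran (nvmftestfini) reverses the earlier setup: unload the kernel NVMe-oF initiator modules, stop the target process, and tear down the namespaced network. A rough sketch, assuming _remove_spdk_ns simply deletes the cvl_0_0_ns_spdk namespace (that helper's body is not shown in the trace):

#!/usr/bin/env bash
# Rough teardown sketch mirroring nvmftestfini/killprocess from the trace above.
NVMFPID=3262424   # target PID reported by nvmfexamplestart

modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics

kill -0 $NVMFPID 2>/dev/null && kill $NVMFPID
wait $NVMFPID                  # works here because the test started the target from this shell

# Assumption: _remove_spdk_ns deletes the namespace created by nvmf_tcp_init.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1       # matches nvmf/common.sh@279 in the trace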
00:05:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:47.669 04:06:35 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:47.669 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:47.670 04:06:35 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:47.670 
04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:47.670 #define SPDK_CONFIG_H 00:05:47.670 #define SPDK_CONFIG_APPS 1 00:05:47.670 #define SPDK_CONFIG_ARCH native 00:05:47.670 #undef SPDK_CONFIG_ASAN 00:05:47.670 #undef SPDK_CONFIG_AVAHI 00:05:47.670 #undef SPDK_CONFIG_CET 00:05:47.670 #define SPDK_CONFIG_COVERAGE 1 00:05:47.670 #define SPDK_CONFIG_CROSS_PREFIX 00:05:47.670 #undef SPDK_CONFIG_CRYPTO 00:05:47.670 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:47.670 #undef SPDK_CONFIG_CUSTOMOCF 00:05:47.670 #undef SPDK_CONFIG_DAOS 00:05:47.670 #define SPDK_CONFIG_DAOS_DIR 00:05:47.670 #define SPDK_CONFIG_DEBUG 1 00:05:47.670 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:47.670 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:47.670 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:47.670 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:47.670 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:47.670 #undef SPDK_CONFIG_DPDK_UADK 00:05:47.670 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:47.670 #define SPDK_CONFIG_EXAMPLES 1 00:05:47.670 #undef SPDK_CONFIG_FC 00:05:47.670 #define SPDK_CONFIG_FC_PATH 00:05:47.670 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:47.670 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:47.670 #undef SPDK_CONFIG_FUSE 00:05:47.670 #undef SPDK_CONFIG_FUZZER 00:05:47.670 #define SPDK_CONFIG_FUZZER_LIB 00:05:47.670 #undef SPDK_CONFIG_GOLANG 00:05:47.670 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:47.670 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:47.670 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:47.670 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:47.670 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:47.670 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:47.670 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:47.670 #define SPDK_CONFIG_IDXD 1 00:05:47.670 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:47.670 #undef SPDK_CONFIG_IPSEC_MB 00:05:47.670 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:47.670 #define SPDK_CONFIG_ISAL 1 00:05:47.670 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:47.670 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:47.670 #define SPDK_CONFIG_LIBDIR 00:05:47.670 #undef SPDK_CONFIG_LTO 00:05:47.670 #define SPDK_CONFIG_MAX_LCORES 00:05:47.670 #define SPDK_CONFIG_NVME_CUSE 1 00:05:47.670 #undef SPDK_CONFIG_OCF 00:05:47.670 #define SPDK_CONFIG_OCF_PATH 00:05:47.670 #define SPDK_CONFIG_OPENSSL_PATH 00:05:47.670 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:47.670 #define SPDK_CONFIG_PGO_DIR 00:05:47.670 #undef 
SPDK_CONFIG_PGO_USE 00:05:47.670 #define SPDK_CONFIG_PREFIX /usr/local 00:05:47.670 #undef SPDK_CONFIG_RAID5F 00:05:47.670 #undef SPDK_CONFIG_RBD 00:05:47.670 #define SPDK_CONFIG_RDMA 1 00:05:47.670 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:47.670 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:47.670 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:47.670 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:47.670 #define SPDK_CONFIG_SHARED 1 00:05:47.670 #undef SPDK_CONFIG_SMA 00:05:47.670 #define SPDK_CONFIG_TESTS 1 00:05:47.670 #undef SPDK_CONFIG_TSAN 00:05:47.670 #define SPDK_CONFIG_UBLK 1 00:05:47.670 #define SPDK_CONFIG_UBSAN 1 00:05:47.670 #undef SPDK_CONFIG_UNIT_TESTS 00:05:47.670 #undef SPDK_CONFIG_URING 00:05:47.670 #define SPDK_CONFIG_URING_PATH 00:05:47.670 #undef SPDK_CONFIG_URING_ZNS 00:05:47.670 #undef SPDK_CONFIG_USDT 00:05:47.670 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:47.670 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:47.670 #define SPDK_CONFIG_VFIO_USER 1 00:05:47.670 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:47.670 #define SPDK_CONFIG_VHOST 1 00:05:47.670 #define SPDK_CONFIG_VIRTIO 1 00:05:47.670 #undef SPDK_CONFIG_VTUNE 00:05:47.670 #define SPDK_CONFIG_VTUNE_DIR 00:05:47.670 #define SPDK_CONFIG_WERROR 1 00:05:47.670 #define SPDK_CONFIG_WPDK_DIR 00:05:47.670 #undef SPDK_CONFIG_XNVME 00:05:47.670 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:47.670 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:05:47.671 04:06:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
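Before any NVMe-oF work starts, set_test_storage (traced just below) picks a scratch directory with at least the requested ~2.2 GB free for test artifacts. A minimal sketch of that selection, assuming illustrative $testdir/$storage_fallback values and omitting the tmpfs/ramfs resize checks visible later in the trace:

    requested_size=2214592512                                   # 2 GiB of test data plus margin, as printed below
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                 # e.g. /tmp/spdk.EbMxEi in this run
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    for target_dir in "${storage_candidates[@]}"; do
        # available bytes on the filesystem holding this candidate (df -P: column 4, 1K blocks)
        target_space=$(( $(df -P "$target_dir" | awk 'NR==2 {print $4}') * 1024 ))
        (( target_space >= requested_size )) && break           # first candidate with enough space wins
    done
    export SPDK_TEST_STORAGE=$target_dir                        # resolves to .../spdk/test/nvmf/target here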
00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3264125 ]] 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3264125 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.EbMxEi 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EbMxEi/tests/target /tmp/spdk.EbMxEi 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:05:47.671 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968667136 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4315762688 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=48405016576 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13589712896 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941728768 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389986304 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8962048 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995693568 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:47.672 04:06:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1671168 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:05:47.672 * Looking for test storage... 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=48405016576 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15804305408 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:05:47.672 04:06:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.672 
04:06:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.672 04:06:35 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:47.672 04:06:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:50.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:50.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.201 04:06:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:50.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:50.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:50.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:05:50.201 00:05:50.201 --- 10.0.0.2 ping statistics --- 00:05:50.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.201 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:05:50.201 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:50.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:05:50.202 00:05:50.202 --- 10.0.0.1 ping statistics --- 00:05:50.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.202 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:50.202 04:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.460 ************************************ 00:05:50.460 START TEST nvmf_filesystem_no_in_capsule 00:05:50.460 ************************************ 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3266044 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3266044 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3266044 ']' 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.460 04:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:50.460 [2024-05-15 04:06:38.303022] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:05:50.460 [2024-05-15 04:06:38.303114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.460 [2024-05-15 04:06:38.380485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.720 [2024-05-15 04:06:38.494879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:50.720 [2024-05-15 04:06:38.494924] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:50.720 [2024-05-15 04:06:38.494962] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.720 [2024-05-15 04:06:38.494984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.720 [2024-05-15 04:06:38.494993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
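The trace above is the point where the harness finishes wiring the TCP test topology and launches the target: port cvl_0_0 has been moved into the private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, reachability is confirmed with a ping in each direction, and nvmf_tgt is started inside the namespace so it only sees the target-side port. A condensed sketch of those steps, reconstructed from the commands logged above (interface names, addresses and the nvmf_tgt flags are the ones from this run, not general defaults):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target netns -> initiator
    modprobe nvme-tcp                                                  # kernel initiator driver
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF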
00:05:50.720 [2024-05-15 04:06:38.495048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.720 [2024-05-15 04:06:38.495106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.720 [2024-05-15 04:06:38.495172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.720 [2024-05-15 04:06:38.495175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.328 [2024-05-15 04:06:39.278963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.328 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.587 Malloc1 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.587 [2024-05-15 04:06:39.465351] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:51.587 [2024-05-15 04:06:39.465636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.587 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:05:51.587 { 00:05:51.587 "name": "Malloc1", 00:05:51.587 "aliases": [ 00:05:51.587 "62d400bd-4c0b-46b7-a309-bde172b7bc48" 00:05:51.587 ], 00:05:51.587 "product_name": "Malloc disk", 00:05:51.587 "block_size": 512, 00:05:51.587 "num_blocks": 1048576, 00:05:51.587 "uuid": "62d400bd-4c0b-46b7-a309-bde172b7bc48", 00:05:51.587 "assigned_rate_limits": { 00:05:51.587 "rw_ios_per_sec": 0, 00:05:51.587 "rw_mbytes_per_sec": 0, 00:05:51.587 "r_mbytes_per_sec": 0, 00:05:51.587 "w_mbytes_per_sec": 0 00:05:51.587 }, 00:05:51.587 "claimed": true, 00:05:51.587 "claim_type": "exclusive_write", 00:05:51.587 "zoned": false, 00:05:51.587 "supported_io_types": { 00:05:51.587 "read": true, 00:05:51.587 "write": true, 00:05:51.587 "unmap": true, 00:05:51.587 "write_zeroes": true, 00:05:51.587 "flush": true, 00:05:51.587 "reset": true, 00:05:51.587 "compare": false, 00:05:51.587 "compare_and_write": false, 00:05:51.587 "abort": true, 00:05:51.587 "nvme_admin": false, 00:05:51.587 "nvme_io": false 00:05:51.587 }, 00:05:51.587 "memory_domains": [ 00:05:51.587 { 00:05:51.587 "dma_device_id": "system", 00:05:51.587 "dma_device_type": 1 
00:05:51.587 }, 00:05:51.587 { 00:05:51.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.588 "dma_device_type": 2 00:05:51.588 } 00:05:51.588 ], 00:05:51.588 "driver_specific": {} 00:05:51.588 } 00:05:51.588 ]' 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:51.588 04:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:52.153 04:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:52.153 04:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:05:52.153 04:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:05:52.153 04:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:05:52.153 04:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:54.681 04:06:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:54.681 04:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:55.246 04:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.620 ************************************ 00:05:56.620 START TEST filesystem_ext4 00:05:56.620 ************************************ 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:05:56.620 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:05:56.620 04:06:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:56.620 mke2fs 1.46.5 (30-Dec-2021) 00:05:56.621 Discarding device blocks: 0/522240 done 00:05:56.621 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:56.621 Filesystem UUID: b2080094-939a-4e8f-90e7-a360226b99b5 00:05:56.621 Superblock backups stored on blocks: 00:05:56.621 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:56.621 00:05:56.621 Allocating group tables: 0/64 done 00:05:56.621 Writing inode tables: 0/64 done 00:05:56.621 Creating journal (8192 blocks): done 00:05:56.621 Writing superblocks and filesystem accounting information: 0/64 done 00:05:56.621 00:05:56.621 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:05:56.621 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3266044 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:56.879 00:05:56.879 real 0m0.551s 00:05:56.879 user 0m0.020s 00:05:56.879 sys 0m0.032s 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:56.879 ************************************ 00:05:56.879 END TEST filesystem_ext4 00:05:56.879 ************************************ 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.879 04:06:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.879 ************************************ 00:05:56.879 START TEST filesystem_btrfs 00:05:56.879 ************************************ 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:05:56.879 04:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:57.446 btrfs-progs v6.6.2 00:05:57.446 See https://btrfs.readthedocs.io for more information. 00:05:57.446 00:05:57.446 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:57.446 NOTE: several default settings have changed in version 5.15, please make sure 00:05:57.446 this does not affect your deployments: 00:05:57.446 - DUP for metadata (-m dup) 00:05:57.446 - enabled no-holes (-O no-holes) 00:05:57.446 - enabled free-space-tree (-R free-space-tree) 00:05:57.446 00:05:57.446 Label: (null) 00:05:57.446 UUID: 3d4c8c7f-845c-42ad-a96f-6843215b5ed6 00:05:57.446 Node size: 16384 00:05:57.446 Sector size: 4096 00:05:57.446 Filesystem size: 510.00MiB 00:05:57.446 Block group profiles: 00:05:57.446 Data: single 8.00MiB 00:05:57.446 Metadata: DUP 32.00MiB 00:05:57.446 System: DUP 8.00MiB 00:05:57.446 SSD detected: yes 00:05:57.446 Zoned device: no 00:05:57.446 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:57.446 Runtime features: free-space-tree 00:05:57.446 Checksum: crc32c 00:05:57.446 Number of devices: 1 00:05:57.446 Devices: 00:05:57.446 ID SIZE PATH 00:05:57.446 1 510.00MiB /dev/nvme0n1p1 00:05:57.446 00:05:57.446 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:05:57.446 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3266044 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:57.705 00:05:57.705 real 0m0.809s 00:05:57.705 user 0m0.011s 00:05:57.705 sys 0m0.054s 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:57.705 ************************************ 00:05:57.705 END TEST filesystem_btrfs 00:05:57.705 ************************************ 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:57.705 04:06:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.705 ************************************ 00:05:57.705 START TEST filesystem_xfs 00:05:57.705 ************************************ 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:05:57.705 04:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:57.963 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:57.963 = sectsz=512 attr=2, projid32bit=1 00:05:57.963 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:57.963 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:57.963 data = bsize=4096 blocks=130560, imaxpct=25 00:05:57.963 = sunit=0 swidth=0 blks 00:05:57.963 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:57.963 log =internal log bsize=4096 blocks=16384, version=2 00:05:57.963 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:57.963 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:58.897 Discarding blocks...Done. 
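The filesystem_ext4, filesystem_btrfs and filesystem_xfs subtests in this block all exercise the namespace exported above with the same cycle: the host connects over TCP, finds the block device by its serial, carves a single GPT partition once, then formats, mounts, writes to and unmounts that partition while checking that the target process survives the I/O. A condensed sketch of the cycle, with the host NQN, device name and target pid taken from this run (they will differ between runs):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')   # -> nvme0n1 here
    parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    for fs in ext4 btrfs xfs; do
        force=-f; [ "$fs" = ext4 ] && force=-F    # make_filesystem() uses -F for ext4, -f otherwise
        mkfs.$fs $force /dev/${dev}p1
        mount /dev/${dev}p1 /mnt/device
        touch /mnt/device/aaa; sync
        rm /mnt/device/aaa; sync
        umount /mnt/device
        kill -0 3266044                           # target pid in this run; it must still be alive
    done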
00:05:58.897 04:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:05:58.897 04:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3266044 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:00.796 00:06:00.796 real 0m2.652s 00:06:00.796 user 0m0.011s 00:06:00.796 sys 0m0.047s 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:00.796 ************************************ 00:06:00.796 END TEST filesystem_xfs 00:06:00.796 ************************************ 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:00.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:00.796 
04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:00.796 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3266044 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3266044 ']' 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3266044 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3266044 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3266044' 00:06:00.797 killing process with pid 3266044 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3266044 00:06:00.797 [2024-05-15 04:06:48.757669] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:00.797 04:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3266044 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:01.365 00:06:01.365 real 0m10.997s 00:06:01.365 user 0m42.032s 00:06:01.365 sys 0m1.659s 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.365 ************************************ 00:06:01.365 END TEST nvmf_filesystem_no_in_capsule 00:06:01.365 ************************************ 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.365 ************************************ 00:06:01.365 START TEST nvmf_filesystem_in_capsule 00:06:01.365 ************************************ 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3267599 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3267599 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3267599 ']' 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.365 04:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.365 [2024-05-15 04:06:49.360763] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:06:01.365 [2024-05-15 04:06:49.360848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.624 [2024-05-15 04:06:49.443120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.624 [2024-05-15 04:06:49.563090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.624 [2024-05-15 04:06:49.563156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
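The run that starts here repeats the whole filesystem exercise with in-capsule data enabled: nvmf_filesystem_part is invoked with 4096 instead of 0, a fresh nvmf_tgt is started (pid 3267599 in this run), and the TCP transport is created with -c 4096 so up to 4 KiB of write data can travel inside the command capsule instead of being solicited separately by the target; the provisioning sequence is otherwise the same as in the first variant. The only RPC argument that differs, as a sketch (rpc_cmd is the harness wrapper around SPDK's rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # first variant: no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this variant: 4 KiB in-capsule data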
00:06:01.624 [2024-05-15 04:06:49.563173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.624 [2024-05-15 04:06:49.563186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.624 [2024-05-15 04:06:49.563198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:01.624 [2024-05-15 04:06:49.563268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.624 [2024-05-15 04:06:49.563323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.624 [2024-05-15 04:06:49.563354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.624 [2024-05-15 04:06:49.563357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 [2024-05-15 04:06:50.371190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 Malloc1 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.558 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.559 04:06:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 [2024-05-15 04:06:50.566356] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:02.559 [2024-05-15 04:06:50.566690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:02.559 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:02.817 { 00:06:02.817 "name": "Malloc1", 00:06:02.817 "aliases": [ 00:06:02.817 "b862dc20-6077-4ac2-86a1-25a675bb03eb" 00:06:02.817 ], 00:06:02.817 "product_name": "Malloc disk", 00:06:02.817 "block_size": 512, 00:06:02.817 "num_blocks": 1048576, 00:06:02.817 "uuid": "b862dc20-6077-4ac2-86a1-25a675bb03eb", 00:06:02.817 "assigned_rate_limits": { 00:06:02.817 "rw_ios_per_sec": 0, 00:06:02.817 "rw_mbytes_per_sec": 0, 00:06:02.817 "r_mbytes_per_sec": 0, 00:06:02.817 "w_mbytes_per_sec": 0 00:06:02.817 }, 00:06:02.817 "claimed": true, 00:06:02.817 "claim_type": "exclusive_write", 00:06:02.817 "zoned": false, 00:06:02.817 "supported_io_types": { 00:06:02.817 "read": true, 00:06:02.817 "write": true, 00:06:02.817 "unmap": true, 00:06:02.817 "write_zeroes": true, 00:06:02.817 "flush": true, 00:06:02.817 "reset": true, 
00:06:02.817 "compare": false, 00:06:02.817 "compare_and_write": false, 00:06:02.817 "abort": true, 00:06:02.817 "nvme_admin": false, 00:06:02.817 "nvme_io": false 00:06:02.817 }, 00:06:02.817 "memory_domains": [ 00:06:02.817 { 00:06:02.817 "dma_device_id": "system", 00:06:02.817 "dma_device_type": 1 00:06:02.817 }, 00:06:02.817 { 00:06:02.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.817 "dma_device_type": 2 00:06:02.817 } 00:06:02.817 ], 00:06:02.817 "driver_specific": {} 00:06:02.817 } 00:06:02.817 ]' 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:02.817 04:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:03.383 04:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:03.383 04:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:03.383 04:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:03.383 04:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:03.383 04:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:05.281 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:05.282 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:05.282 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:05.282 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:05.282 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:05.540 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:05.797 04:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:06.765 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:06.765 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:06.765 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:06.765 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.765 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.024 ************************************ 00:06:07.024 START TEST filesystem_in_capsule_ext4 00:06:07.024 ************************************ 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:07.024 04:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:07.024 mke2fs 1.46.5 (30-Dec-2021) 00:06:07.024 Discarding device blocks: 0/522240 done 00:06:07.024 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:07.024 Filesystem UUID: 6b9b1192-009c-4fb7-b603-0a3e1acd0abe 00:06:07.024 Superblock backups stored on blocks: 00:06:07.024 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:07.024 00:06:07.024 Allocating group tables: 0/64 done 00:06:07.024 Writing inode tables: 0/64 done 00:06:07.590 Creating journal (8192 blocks): done 00:06:08.672 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:06:08.672 00:06:08.672 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:08.672 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.672 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.672 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3267599 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.931 00:06:08.931 real 0m1.924s 00:06:08.931 user 0m0.010s 00:06:08.931 sys 0m0.038s 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:08.931 ************************************ 00:06:08.931 END TEST filesystem_in_capsule_ext4 00:06:08.931 ************************************ 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.931 ************************************ 00:06:08.931 START TEST filesystem_in_capsule_btrfs 00:06:08.931 ************************************ 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:08.931 04:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:09.497 btrfs-progs v6.6.2 00:06:09.497 See https://btrfs.readthedocs.io for more information. 00:06:09.497 00:06:09.497 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:09.497 NOTE: several default settings have changed in version 5.15, please make sure 00:06:09.497 this does not affect your deployments: 00:06:09.497 - DUP for metadata (-m dup) 00:06:09.497 - enabled no-holes (-O no-holes) 00:06:09.497 - enabled free-space-tree (-R free-space-tree) 00:06:09.497 00:06:09.497 Label: (null) 00:06:09.497 UUID: 0368f976-dd5b-4abb-8ac2-7b1112b1d243 00:06:09.497 Node size: 16384 00:06:09.497 Sector size: 4096 00:06:09.497 Filesystem size: 510.00MiB 00:06:09.497 Block group profiles: 00:06:09.497 Data: single 8.00MiB 00:06:09.497 Metadata: DUP 32.00MiB 00:06:09.497 System: DUP 8.00MiB 00:06:09.497 SSD detected: yes 00:06:09.497 Zoned device: no 00:06:09.497 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:09.497 Runtime features: free-space-tree 00:06:09.497 Checksum: crc32c 00:06:09.497 Number of devices: 1 00:06:09.497 Devices: 00:06:09.497 ID SIZE PATH 00:06:09.497 1 510.00MiB /dev/nvme0n1p1 00:06:09.497 00:06:09.497 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:09.497 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3267599 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.064 04:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.064 00:06:10.064 real 0m1.246s 00:06:10.064 user 0m0.015s 00:06:10.064 sys 0m0.037s 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:10.064 ************************************ 00:06:10.064 END TEST filesystem_in_capsule_btrfs 00:06:10.064 ************************************ 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.064 ************************************ 00:06:10.064 START TEST filesystem_in_capsule_xfs 00:06:10.064 ************************************ 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:10.064 04:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:10.322 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:10.322 = sectsz=512 attr=2, projid32bit=1 00:06:10.322 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:10.322 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:10.322 data = bsize=4096 blocks=130560, imaxpct=25 00:06:10.322 = sunit=0 swidth=0 blks 00:06:10.322 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:10.322 log =internal log bsize=4096 blocks=16384, version=2 00:06:10.322 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:10.322 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:11.257 Discarding blocks...Done. 
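At this point mkfs.xfs has finished laying down the filesystem; the smoke test that target/filesystem.sh then runs is the same for the ext4, btrfs and xfs cases above. Reconstructed from the traced commands (device path and mount point as used by the harness, shown here only as a rough sketch):

  mount /dev/nvme0n1p1 /mnt/device           # mount the freshly formatted partition on the NVMe-oF attached namespace
  touch /mnt/device/aaa && sync              # create a file and flush it through to the target
  rm /mnt/device/aaa && sync                 # remove it again and flush once more
  umount /mnt/device                         # unmount before checking the block devices
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # confirm the namespace and its partition are still visible

The per-filesystem timing reported at the end of each case ("real 0m…") is simply the wall-clock time of this mkfs-plus-mount cycle.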
00:06:11.257 04:06:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:11.257 04:06:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:13.785 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.043 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3267599 00:06:14.043 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.044 00:06:14.044 real 0m3.775s 00:06:14.044 user 0m0.015s 00:06:14.044 sys 0m0.040s 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:14.044 ************************************ 00:06:14.044 END TEST filesystem_in_capsule_xfs 00:06:14.044 ************************************ 00:06:14.044 04:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:14.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.302 04:07:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3267599 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3267599 ']' 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3267599 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3267599 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3267599' 00:06:14.302 killing process with pid 3267599 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3267599 00:06:14.302 [2024-05-15 04:07:02.219096] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:14.302 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3267599 00:06:14.869 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:14.870 00:06:14.870 real 0m13.392s 00:06:14.870 user 0m51.327s 00:06:14.870 sys 0m1.877s 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.870 ************************************ 00:06:14.870 END TEST nvmf_filesystem_in_capsule 00:06:14.870 ************************************ 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:14.870 rmmod nvme_tcp 00:06:14.870 rmmod nvme_fabrics 00:06:14.870 rmmod nvme_keyring 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:14.870 04:07:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.402 04:07:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.402 00:06:17.402 real 0m29.346s 00:06:17.402 user 1m34.393s 00:06:17.402 sys 0m5.467s 00:06:17.402 04:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.402 04:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.402 ************************************ 00:06:17.402 END TEST nvmf_filesystem 00:06:17.402 ************************************ 00:06:17.402 04:07:04 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:17.402 04:07:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:17.402 04:07:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.402 04:07:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.402 ************************************ 00:06:17.402 START TEST nvmf_target_discovery 00:06:17.402 ************************************ 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:17.402 * Looking for test storage... 
00:06:17.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.402 04:07:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.403 04:07:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.933 04:07:07 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:19.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:19.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:19.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:19.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:19.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:06:19.933 00:06:19.933 --- 10.0.0.2 ping statistics --- 00:06:19.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.933 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:06:19.933 00:06:19.933 --- 10.0.0.1 ping statistics --- 00:06:19.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.933 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:19.933 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3271636 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3271636 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3271636 ']' 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.934 04:07:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:19.934 [2024-05-15 04:07:07.663273] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:06:19.934 [2024-05-15 04:07:07.663365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.934 [2024-05-15 04:07:07.736981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.934 [2024-05-15 04:07:07.847717] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.934 [2024-05-15 04:07:07.847778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.934 [2024-05-15 04:07:07.847806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.934 [2024-05-15 04:07:07.847824] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.934 [2024-05-15 04:07:07.847833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.934 [2024-05-15 04:07:07.847984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.934 [2024-05-15 04:07:07.848043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.934 [2024-05-15 04:07:07.848014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.934 [2024-05-15 04:07:07.848041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 [2024-05-15 04:07:08.694141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:20.864 04:07:08 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 Null1 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 [2024-05-15 04:07:08.734185] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:20.864 [2024-05-15 04:07:08.734446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 Null2 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 Null3 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 Null4 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.864 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:21.121 00:06:21.121 Discovery Log Number of Records 6, Generation counter 6 00:06:21.121 =====Discovery Log Entry 0====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: current discovery subsystem 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4420 00:06:21.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:21.121 traddr: 10.0.0.2 00:06:21.121 eflags: explicit discovery connections, duplicate discovery information 00:06:21.121 sectype: none 00:06:21.121 =====Discovery Log Entry 1====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: nvme subsystem 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4420 00:06:21.121 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:21.121 traddr: 10.0.0.2 00:06:21.121 eflags: none 00:06:21.121 sectype: none 00:06:21.121 =====Discovery Log Entry 2====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: nvme subsystem 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4420 00:06:21.121 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:21.121 traddr: 10.0.0.2 00:06:21.121 eflags: none 00:06:21.121 sectype: none 00:06:21.121 =====Discovery Log Entry 3====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: nvme subsystem 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4420 00:06:21.121 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:21.121 traddr: 10.0.0.2 
00:06:21.121 eflags: none 00:06:21.121 sectype: none 00:06:21.121 =====Discovery Log Entry 4====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: nvme subsystem 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4420 00:06:21.121 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:21.121 traddr: 10.0.0.2 00:06:21.121 eflags: none 00:06:21.121 sectype: none 00:06:21.121 =====Discovery Log Entry 5====== 00:06:21.121 trtype: tcp 00:06:21.121 adrfam: ipv4 00:06:21.121 subtype: discovery subsystem referral 00:06:21.121 treq: not required 00:06:21.121 portid: 0 00:06:21.121 trsvcid: 4430 00:06:21.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:21.121 traddr: 10.0.0.2 00:06:21.121 eflags: none 00:06:21.121 sectype: none 00:06:21.121 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:21.121 Perform nvmf subsystem discovery via RPC 00:06:21.121 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:21.121 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.121 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.121 [ 00:06:21.121 { 00:06:21.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:21.121 "subtype": "Discovery", 00:06:21.121 "listen_addresses": [ 00:06:21.121 { 00:06:21.121 "trtype": "TCP", 00:06:21.121 "adrfam": "IPv4", 00:06:21.121 "traddr": "10.0.0.2", 00:06:21.121 "trsvcid": "4420" 00:06:21.121 } 00:06:21.121 ], 00:06:21.121 "allow_any_host": true, 00:06:21.121 "hosts": [] 00:06:21.121 }, 00:06:21.121 { 00:06:21.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:21.121 "subtype": "NVMe", 00:06:21.121 "listen_addresses": [ 00:06:21.121 { 00:06:21.121 "trtype": "TCP", 00:06:21.121 "adrfam": "IPv4", 00:06:21.121 "traddr": "10.0.0.2", 00:06:21.121 "trsvcid": "4420" 00:06:21.121 } 00:06:21.121 ], 00:06:21.121 "allow_any_host": true, 00:06:21.121 "hosts": [], 00:06:21.121 "serial_number": "SPDK00000000000001", 00:06:21.121 "model_number": "SPDK bdev Controller", 00:06:21.121 "max_namespaces": 32, 00:06:21.121 "min_cntlid": 1, 00:06:21.121 "max_cntlid": 65519, 00:06:21.121 "namespaces": [ 00:06:21.121 { 00:06:21.121 "nsid": 1, 00:06:21.121 "bdev_name": "Null1", 00:06:21.121 "name": "Null1", 00:06:21.121 "nguid": "32DB0892C0974F63A9B825586F963BDB", 00:06:21.121 "uuid": "32db0892-c097-4f63-a9b8-25586f963bdb" 00:06:21.121 } 00:06:21.121 ] 00:06:21.121 }, 00:06:21.121 { 00:06:21.121 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:21.121 "subtype": "NVMe", 00:06:21.121 "listen_addresses": [ 00:06:21.121 { 00:06:21.121 "trtype": "TCP", 00:06:21.121 "adrfam": "IPv4", 00:06:21.121 "traddr": "10.0.0.2", 00:06:21.121 "trsvcid": "4420" 00:06:21.121 } 00:06:21.121 ], 00:06:21.121 "allow_any_host": true, 00:06:21.121 "hosts": [], 00:06:21.121 "serial_number": "SPDK00000000000002", 00:06:21.121 "model_number": "SPDK bdev Controller", 00:06:21.121 "max_namespaces": 32, 00:06:21.121 "min_cntlid": 1, 00:06:21.121 "max_cntlid": 65519, 00:06:21.121 "namespaces": [ 00:06:21.121 { 00:06:21.121 "nsid": 1, 00:06:21.122 "bdev_name": "Null2", 00:06:21.122 "name": "Null2", 00:06:21.122 "nguid": "C1DDC0005DA1481FA6A30D96AE02C3BA", 00:06:21.122 "uuid": "c1ddc000-5da1-481f-a6a3-0d96ae02c3ba" 00:06:21.122 } 00:06:21.122 ] 00:06:21.122 }, 00:06:21.122 { 00:06:21.122 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:21.122 "subtype": "NVMe", 00:06:21.122 "listen_addresses": [ 
00:06:21.122 { 00:06:21.122 "trtype": "TCP", 00:06:21.122 "adrfam": "IPv4", 00:06:21.122 "traddr": "10.0.0.2", 00:06:21.122 "trsvcid": "4420" 00:06:21.122 } 00:06:21.122 ], 00:06:21.122 "allow_any_host": true, 00:06:21.122 "hosts": [], 00:06:21.122 "serial_number": "SPDK00000000000003", 00:06:21.122 "model_number": "SPDK bdev Controller", 00:06:21.122 "max_namespaces": 32, 00:06:21.122 "min_cntlid": 1, 00:06:21.122 "max_cntlid": 65519, 00:06:21.122 "namespaces": [ 00:06:21.122 { 00:06:21.122 "nsid": 1, 00:06:21.122 "bdev_name": "Null3", 00:06:21.122 "name": "Null3", 00:06:21.122 "nguid": "DA5BBC73575B47C882899146BC7BCA58", 00:06:21.122 "uuid": "da5bbc73-575b-47c8-8289-9146bc7bca58" 00:06:21.122 } 00:06:21.122 ] 00:06:21.122 }, 00:06:21.122 { 00:06:21.122 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:21.122 "subtype": "NVMe", 00:06:21.122 "listen_addresses": [ 00:06:21.122 { 00:06:21.122 "trtype": "TCP", 00:06:21.122 "adrfam": "IPv4", 00:06:21.122 "traddr": "10.0.0.2", 00:06:21.122 "trsvcid": "4420" 00:06:21.122 } 00:06:21.122 ], 00:06:21.122 "allow_any_host": true, 00:06:21.122 "hosts": [], 00:06:21.122 "serial_number": "SPDK00000000000004", 00:06:21.122 "model_number": "SPDK bdev Controller", 00:06:21.122 "max_namespaces": 32, 00:06:21.122 "min_cntlid": 1, 00:06:21.122 "max_cntlid": 65519, 00:06:21.122 "namespaces": [ 00:06:21.122 { 00:06:21.122 "nsid": 1, 00:06:21.122 "bdev_name": "Null4", 00:06:21.122 "name": "Null4", 00:06:21.122 "nguid": "297FEE551ACF439D8A0A138B4AE100EA", 00:06:21.122 "uuid": "297fee55-1acf-439d-8a0a-138b4ae100ea" 00:06:21.122 } 00:06:21.122 ] 00:06:21.122 } 00:06:21.122 ] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:21.122 
04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.122 rmmod nvme_tcp 00:06:21.122 rmmod nvme_fabrics 00:06:21.122 rmmod nvme_keyring 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3271636 ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3271636 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3271636 ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3271636 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3271636 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3271636' 00:06:21.122 killing process with pid 3271636 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3271636 00:06:21.122 [2024-05-15 04:07:09.120732] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:21.122 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3271636 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.381 04:07:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.936 04:07:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:23.936 00:06:23.936 real 0m6.546s 00:06:23.936 user 
0m7.173s 00:06:23.936 sys 0m2.240s 00:06:23.936 04:07:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.936 04:07:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.936 ************************************ 00:06:23.936 END TEST nvmf_target_discovery 00:06:23.936 ************************************ 00:06:23.936 04:07:11 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:23.936 04:07:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:23.936 04:07:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.936 04:07:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.936 ************************************ 00:06:23.936 START TEST nvmf_referrals 00:06:23.936 ************************************ 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:23.936 * Looking for test storage... 00:06:23.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.936 04:07:11 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.936 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:23.937 04:07:11 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.937 04:07:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:26.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:26.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:26.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:26.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:26.472 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
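The nvmf_tcp_init sequence that starts here and finishes just below gives the test a self-contained NVMe/TCP topology on one host: one port of the E810 pair (cvl_0_0) is moved into its own network namespace to act as the target at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the same steps, with interface names and addresses taken from this run (nvmf_tcp_init itself lives in the test harness; this is an illustration of what it does on this rig, not the full function):

    # drop any stale addressing on both ports of the NIC
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # target side: dedicated namespace, 10.0.0.2/24 on cvl_0_0
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: 10.0.0.1/24 on cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # let NVMe/TCP traffic in from the initiator port, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives inside cvl_0_0_ns_spdk, every later target-side command in this log is wrapped in "ip netns exec cvl_0_0_ns_spdk ...", including the nvmf_tgt invocation itself.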
00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:26.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:06:26.473 00:06:26.473 --- 10.0.0.2 ping statistics --- 00:06:26.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.473 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:06:26.473 00:06:26.473 --- 10.0.0.1 ping statistics --- 00:06:26.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.473 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3274153 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3274153 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3274153 ']' 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.473 04:07:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.473 [2024-05-15 04:07:14.348389] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:06:26.473 [2024-05-15 04:07:14.348483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.473 [2024-05-15 04:07:14.424097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.731 [2024-05-15 04:07:14.535851] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.731 [2024-05-15 04:07:14.535906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.731 [2024-05-15 04:07:14.535940] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.731 [2024-05-15 04:07:14.535953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.731 [2024-05-15 04:07:14.535963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.731 [2024-05-15 04:07:14.536037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.731 [2024-05-15 04:07:14.536112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.731 [2024-05-15 04:07:14.536082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.731 [2024-05-15 04:07:14.536114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 [2024-05-15 04:07:15.344013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 [2024-05-15 04:07:15.355964] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:27.665 [2024-05-15 04:07:15.356265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:27.665 04:07:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.665 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:27.666 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.924 04:07:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.182 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
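The get_referral_ips and get_discovery_entries helpers above compare two views of the same referral table: what the target reports over its RPC socket and what a host actually receives from the discovery service listening on 10.0.0.2:8009. A minimal sketch of that cross-check, reusing the invocations visible in this run (rpc_cmd is the autotest wrapper around scripts/rpc.py, and discover_json here is a hypothetical local helper standing in for the repeated nvme discover call with this host's hostnqn/hostid):

    discover_json() {
        nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                      -t tcp -a 10.0.0.2 -s 8009 -o json
    }

    # target-side view: referrals as registered through the RPC interface
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # host-side view: discovery log entries returned to an initiator, minus the
    # "current discovery subsystem" record that describes the discovery service itself
    discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # a referral added with -n nqn.2016-06.io.spdk:cnode1 should surface as an
    # "nvme subsystem" record; one added with -n discovery as a referral record
    discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    discover_json | jq '.records[] | select(.subtype == "discovery subsystem referral")'

Both listings are sorted so the script can compare them with a plain string equality test, which is what the [[ ... == ... ]] checks in the log are doing.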
00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.183 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:28.441 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:28.441 rmmod nvme_tcp 00:06:28.441 rmmod nvme_fabrics 00:06:28.699 rmmod nvme_keyring 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3274153 ']' 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3274153 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3274153 ']' 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3274153 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274153 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274153' 00:06:28.700 killing process with pid 3274153 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3274153 00:06:28.700 [2024-05-15 04:07:16.512043] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:28.700 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3274153 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
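nvmftestfini then unwinds everything the referrals test set up: the kernel initiator modules are unloaded, the nvmf_tgt process (pid 3274153 in this run) is killed and reaped, the target network namespace is removed, and the initiator address is flushed. Condensed, the sequence around this point is roughly:

    modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"               # nvmfpid was recorded when nvmf_tgt was started
    wait "$nvmfpid"
    _remove_spdk_ns               # harness helper (assumed to remove the cvl_0_0_ns_spdk namespace)
    ip -4 addr flush cvl_0_1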
00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.959 04:07:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.866 04:07:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:30.866 00:06:30.866 real 0m7.351s 00:06:30.866 user 0m10.624s 00:06:30.866 sys 0m2.382s 00:06:30.866 04:07:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.866 04:07:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:30.866 ************************************ 00:06:30.866 END TEST nvmf_referrals 00:06:30.866 ************************************ 00:06:30.866 04:07:18 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:30.866 04:07:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:30.866 04:07:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.866 04:07:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.125 ************************************ 00:06:31.125 START TEST nvmf_connect_disconnect 00:06:31.125 ************************************ 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:31.125 * Looking for test storage... 00:06:31.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.125 04:07:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
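The host-identity setup traced just above (nvmf/common.sh@17-19) reduces to a small sketch; the way the host ID is cut out of the generated NQN is an assumption, but the resulting values match the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: keep the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later nvme discover/connect calls pass "${NVME_HOST[@]}" so the target sees
    # one stable host identity for the whole run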
00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:31.125 04:07:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:33.659 
04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:33.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:33.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:33.659 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:33.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:33.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:33.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:06:33.660 00:06:33.660 --- 10.0.0.2 ping statistics --- 00:06:33.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.660 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:33.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:06:33.660 00:06:33.660 --- 10.0.0.1 ping statistics --- 00:06:33.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.660 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3276746 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3276746 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3276746 ']' 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.660 04:07:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.660 [2024-05-15 04:07:21.660639] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
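The target start captured above, plus the test body that follows it, condense to a short script. The rpc_cmd lines are copied from this run; the readiness loop and the connect/disconnect pair are stand-ins for waitforlisten and for whatever produces the five "disconnected 1 controller(s)" lines, so treat this as a hedged sketch rather than the literal connect_disconnect.sh code (paths shortened to the repo root):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done  # assumed wait

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                      # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in {1..5}; do                                    # num_iterations=5 in the trace
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done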
00:06:33.660 [2024-05-15 04:07:21.660728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.918 [2024-05-15 04:07:21.740081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.918 [2024-05-15 04:07:21.851092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.918 [2024-05-15 04:07:21.851151] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.918 [2024-05-15 04:07:21.851179] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.918 [2024-05-15 04:07:21.851191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.918 [2024-05-15 04:07:21.851201] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.918 [2024-05-15 04:07:21.851273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.918 [2024-05-15 04:07:21.851311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.918 [2024-05-15 04:07:21.851338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.918 [2024-05-15 04:07:21.851341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 [2024-05-15 04:07:22.696135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:34.852 04:07:22 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 [2024-05-15 04:07:22.757125] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:34.852 [2024-05-15 04:07:22.757459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:34.852 04:07:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:38.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.276 rmmod nvme_tcp 00:06:48.276 rmmod nvme_fabrics 00:06:48.276 rmmod nvme_keyring 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:48.276 04:07:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3276746 ']' 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3276746 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3276746 ']' 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3276746 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276746 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276746' 00:06:48.276 killing process with pid 3276746 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3276746 00:06:48.276 [2024-05-15 04:07:36.286751] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:48.276 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3276746 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.844 04:07:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.752 04:07:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.752 00:06:50.752 real 0m19.740s 00:06:50.752 user 0m58.467s 00:06:50.752 sys 0m3.743s 00:06:50.752 04:07:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.752 04:07:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 ************************************ 00:06:50.752 END TEST nvmf_connect_disconnect 00:06:50.752 ************************************ 00:06:50.752 04:07:38 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:50.752 04:07:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:50.752 04:07:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.752 04:07:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 ************************************ 00:06:50.752 START TEST nvmf_multitarget 
00:06:50.752 ************************************ 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:50.752 * Looking for test storage... 00:06:50.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.752 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
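nvmftestinit, entered just above, repeats the same physical-port bring-up already seen in the connect_disconnect run: flush both cvl interfaces, move the target port into its own namespace, and address the two sides. In outline, using the addresses and names printed below:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT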
00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.753 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.011 04:07:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.011 04:07:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:53.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:53.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:53.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
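The device walk above is how common.sh turns the two E810 PCI functions into the cvl_* interface names used everywhere else: each device is mapped to its kernel interface through sysfs. A trimmed-down version of the loop visible in the trace (the link-state check is omitted here):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done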
00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:53.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.544 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:06:53.545 00:06:53.545 --- 10.0.0.2 ping statistics --- 00:06:53.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.545 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:06:53.545 00:06:53.545 --- 10.0.0.1 ping statistics --- 00:06:53.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.545 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3280918 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3280918 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3280918 ']' 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.545 04:07:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:53.545 [2024-05-15 04:07:41.447319] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
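Once this second target app is up, the body of the multitarget test that follows is compact: confirm only the default target exists, add two named targets, confirm the count, delete them, and confirm the count drops back. Stripped of xtrace it is roughly the following; helper path, target names and sizes are as in the trace, while the -eq checks paraphrase the '!=' tests the script actually runs:

    rpc_py=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default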
00:06:53.545 [2024-05-15 04:07:41.447403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.545 [2024-05-15 04:07:41.533012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.803 [2024-05-15 04:07:41.657783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.803 [2024-05-15 04:07:41.657837] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.803 [2024-05-15 04:07:41.657854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.803 [2024-05-15 04:07:41.657867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.803 [2024-05-15 04:07:41.657879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.803 [2024-05-15 04:07:41.657978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.803 [2024-05-15 04:07:41.660953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.803 [2024-05-15 04:07:41.660990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.803 [2024-05-15 04:07:41.660995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:54.735 "nvmf_tgt_1" 00:06:54.735 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:54.992 "nvmf_tgt_2" 00:06:54.992 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:54.993 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:54.993 04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:54.993 
04:07:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:55.250 true 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:55.250 true 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.250 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.250 rmmod nvme_tcp 00:06:55.508 rmmod nvme_fabrics 00:06:55.508 rmmod nvme_keyring 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3280918 ']' 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3280918 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3280918 ']' 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3280918 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3280918 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3280918' 00:06:55.508 killing process with pid 3280918 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3280918 00:06:55.508 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3280918 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.767 04:07:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.672 04:07:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.672 00:06:57.672 real 0m6.958s 00:06:57.672 user 0m9.505s 00:06:57.672 sys 0m2.304s 00:06:57.672 04:07:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.672 04:07:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:57.672 ************************************ 00:06:57.672 END TEST nvmf_multitarget 00:06:57.672 ************************************ 00:06:57.672 04:07:45 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.672 04:07:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:57.672 04:07:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.672 04:07:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.931 ************************************ 00:06:57.931 START TEST nvmf_rpc 00:06:57.931 ************************************ 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.931 * Looking for test storage... 00:06:57.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.931 04:07:45 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.931 
04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.931 04:07:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.542 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:00.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:00.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:00.543 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.543 
04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:00.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:00.543 00:07:00.543 --- 10.0.0.2 ping statistics --- 00:07:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.543 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:00.543 00:07:00.543 --- 10.0.0.1 ping statistics --- 00:07:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.543 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3283440 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3283440 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3283440 ']' 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.543 04:07:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.543 [2024-05-15 04:07:48.475129] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:07:00.543 [2024-05-15 04:07:48.475210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.800 [2024-05-15 04:07:48.558555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.800 [2024-05-15 04:07:48.681038] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.800 [2024-05-15 04:07:48.681102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.800 [2024-05-15 04:07:48.681119] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.800 [2024-05-15 04:07:48.681132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.800 [2024-05-15 04:07:48.681143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.800 [2024-05-15 04:07:48.681203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.801 [2024-05-15 04:07:48.681257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.801 [2024-05-15 04:07:48.681291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.801 [2024-05-15 04:07:48.681293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:01.734 "tick_rate": 2700000000, 00:07:01.734 "poll_groups": [ 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_000", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [] 00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_001", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [] 00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_002", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [] 
00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_003", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [] 00:07:01.734 } 00:07:01.734 ] 00:07:01.734 }' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 [2024-05-15 04:07:49.526224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:01.734 "tick_rate": 2700000000, 00:07:01.734 "poll_groups": [ 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_000", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [ 00:07:01.734 { 00:07:01.734 "trtype": "TCP" 00:07:01.734 } 00:07:01.734 ] 00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_001", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [ 00:07:01.734 { 00:07:01.734 "trtype": "TCP" 00:07:01.734 } 00:07:01.734 ] 00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_002", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [ 00:07:01.734 { 00:07:01.734 "trtype": "TCP" 00:07:01.734 } 00:07:01.734 ] 00:07:01.734 }, 00:07:01.734 { 00:07:01.734 "name": "nvmf_tgt_poll_group_003", 00:07:01.734 "admin_qpairs": 0, 00:07:01.734 "io_qpairs": 0, 00:07:01.734 "current_admin_qpairs": 0, 00:07:01.734 "current_io_qpairs": 0, 00:07:01.734 "pending_bdev_io": 0, 00:07:01.734 "completed_nvme_io": 0, 00:07:01.734 "transports": [ 00:07:01.734 { 00:07:01.734 "trtype": "TCP" 00:07:01.734 } 00:07:01.734 ] 00:07:01.734 } 00:07:01.734 ] 
00:07:01.734 }' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 Malloc1 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.734 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.735 [2024-05-15 04:07:49.683264] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:01.735 [2024-05-15 04:07:49.683555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.735 04:07:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:01.735 [2024-05-15 04:07:49.706174] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:01.735 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:01.735 could not add new controller: failed to write to nvme-fabrics device 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.735 04:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.301 04:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
00:07:02.301 04:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:02.301 04:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.301 04:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:02.301 04:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.829 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.830 [2024-05-15 04:07:52.455968] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:04.830 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:04.830 could not add new controller: failed to write to nvme-fabrics device 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.830 04:07:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.088 04:07:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.088 04:07:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:05.088 04:07:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.088 04:07:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:05.088 04:07:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 [2024-05-15 04:07:55.174481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.615 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.874 04:07:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.874 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:07.874 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.874 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:07.874 04:07:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:10.401 
04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 [2024-05-15 04:07:57.983383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.401 04:07:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 04:07:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.401 04:07:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.659 04:07:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.659 04:07:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:10.659 04:07:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.659 04:07:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:10.659 04:07:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:12.559 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:12.559 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:12.559 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 [2024-05-15 04:08:00.677626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.817 04:08:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.382 04:08:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.382 04:08:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:13.382 04:08:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.382 04:08:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:13.382 04:08:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:15.280 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 [2024-05-15 04:08:03.401309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.538 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.104 04:08:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:07:16.104 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:16.104 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.104 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:16.104 04:08:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:18.003 04:08:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.287 
[2024-05-15 04:08:06.168765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.287 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.288 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.854 04:08:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.854 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:18.854 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.854 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:18.854 04:08:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:20.752 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.010 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 [2024-05-15 04:08:08.892384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 [2024-05-15 04:08:08.940452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 [2024-05-15 04:08:08.988612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.011 
04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.011 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:21.269 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.269 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.269 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.269 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 [2024-05-15 04:08:09.036760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 [2024-05-15 04:08:09.084943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:21.270 "tick_rate": 2700000000, 00:07:21.270 "poll_groups": [ 00:07:21.270 { 00:07:21.270 "name": "nvmf_tgt_poll_group_000", 00:07:21.270 "admin_qpairs": 2, 00:07:21.270 "io_qpairs": 84, 00:07:21.270 "current_admin_qpairs": 0, 00:07:21.270 "current_io_qpairs": 0, 00:07:21.270 "pending_bdev_io": 0, 00:07:21.270 "completed_nvme_io": 135, 00:07:21.270 "transports": [ 00:07:21.270 { 00:07:21.270 "trtype": "TCP" 00:07:21.270 } 00:07:21.270 ] 00:07:21.270 }, 00:07:21.270 { 00:07:21.270 "name": "nvmf_tgt_poll_group_001", 00:07:21.270 "admin_qpairs": 2, 00:07:21.270 "io_qpairs": 84, 00:07:21.270 "current_admin_qpairs": 0, 00:07:21.270 "current_io_qpairs": 0, 00:07:21.270 "pending_bdev_io": 0, 00:07:21.270 "completed_nvme_io": 276, 00:07:21.270 "transports": [ 00:07:21.270 { 00:07:21.270 "trtype": "TCP" 00:07:21.270 } 00:07:21.270 ] 00:07:21.270 }, 00:07:21.270 { 00:07:21.270 "name": "nvmf_tgt_poll_group_002", 00:07:21.270 "admin_qpairs": 1, 00:07:21.270 "io_qpairs": 84, 00:07:21.270 "current_admin_qpairs": 0, 00:07:21.270 "current_io_qpairs": 0, 00:07:21.270 "pending_bdev_io": 0, 00:07:21.270 "completed_nvme_io": 91, 00:07:21.270 "transports": [ 00:07:21.270 { 00:07:21.270 "trtype": "TCP" 00:07:21.270 } 00:07:21.270 ] 00:07:21.270 }, 00:07:21.270 { 00:07:21.270 "name": "nvmf_tgt_poll_group_003", 00:07:21.270 "admin_qpairs": 2, 00:07:21.270 "io_qpairs": 84, 00:07:21.270 "current_admin_qpairs": 0, 00:07:21.270 "current_io_qpairs": 0, 00:07:21.270 "pending_bdev_io": 0, 00:07:21.270 "completed_nvme_io": 184, 00:07:21.270 "transports": [ 00:07:21.270 { 00:07:21.270 "trtype": "TCP" 00:07:21.270 } 00:07:21.270 ] 00:07:21.270 } 00:07:21.270 ] 00:07:21.270 }' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.270 rmmod nvme_tcp 00:07:21.270 rmmod nvme_fabrics 00:07:21.270 rmmod nvme_keyring 00:07:21.270 04:08:09 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3283440 ']' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3283440 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3283440 ']' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3283440 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.270 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3283440 00:07:21.529 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.529 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.529 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3283440' 00:07:21.529 killing process with pid 3283440 00:07:21.529 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3283440 00:07:21.529 [2024-05-15 04:08:09.287937] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.529 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3283440 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.789 04:08:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.695 04:08:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.695 00:07:23.695 real 0m25.944s 00:07:23.695 user 1m22.885s 00:07:23.695 sys 0m4.352s 00:07:23.695 04:08:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.695 04:08:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.695 ************************************ 00:07:23.695 END TEST nvmf_rpc 00:07:23.695 ************************************ 00:07:23.695 04:08:11 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.695 04:08:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:23.695 04:08:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.695 04:08:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.695 ************************************ 00:07:23.695 START TEST nvmf_invalid 00:07:23.695 ************************************ 00:07:23.695 04:08:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.954 * Looking for test storage... 00:07:23.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.954 04:08:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:26.481 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:26.481 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:26.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:26.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:26.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:26.481 00:07:26.481 --- 10.0.0.2 ping statistics --- 00:07:26.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.481 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:26.481 00:07:26.481 --- 10.0.0.1 ping statistics --- 00:07:26.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.481 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.481 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3288359 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3288359 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3288359 ']' 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.482 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:26.740 [2024-05-15 04:08:14.506284] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:07:26.740 [2024-05-15 04:08:14.506362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.740 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.740 [2024-05-15 04:08:14.581996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.740 [2024-05-15 04:08:14.693665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.740 [2024-05-15 04:08:14.693731] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.740 [2024-05-15 04:08:14.693744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.740 [2024-05-15 04:08:14.693770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.740 [2024-05-15 04:08:14.693779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.740 [2024-05-15 04:08:14.693859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.740 [2024-05-15 04:08:14.693891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.740 [2024-05-15 04:08:14.693984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.740 [2024-05-15 04:08:14.693988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:26.998 04:08:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23102 00:07:27.255 [2024-05-15 04:08:15.089460] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:27.255 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:27.255 { 00:07:27.255 "nqn": "nqn.2016-06.io.spdk:cnode23102", 00:07:27.255 "tgt_name": "foobar", 00:07:27.255 "method": "nvmf_create_subsystem", 00:07:27.255 "req_id": 1 00:07:27.255 } 00:07:27.255 Got JSON-RPC error response 00:07:27.255 response: 00:07:27.255 { 00:07:27.255 "code": -32603, 00:07:27.255 "message": "Unable to find target foobar" 00:07:27.255 }' 00:07:27.255 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:27.255 { 00:07:27.255 "nqn": "nqn.2016-06.io.spdk:cnode23102", 00:07:27.255 "tgt_name": "foobar", 00:07:27.255 "method": "nvmf_create_subsystem", 00:07:27.255 "req_id": 1 00:07:27.255 } 00:07:27.255 Got JSON-RPC error response 00:07:27.255 response: 00:07:27.255 { 00:07:27.255 "code": -32603, 00:07:27.255 "message": "Unable to find target foobar" 00:07:27.255 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:27.255 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:27.255 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24861 00:07:27.513 [2024-05-15 04:08:15.346367] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24861: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:27.513 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:27.513 { 00:07:27.513 "nqn": "nqn.2016-06.io.spdk:cnode24861", 00:07:27.513 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:27.513 "method": "nvmf_create_subsystem", 00:07:27.513 "req_id": 1 00:07:27.513 } 00:07:27.513 Got JSON-RPC error response 00:07:27.513 response: 00:07:27.513 { 00:07:27.513 "code": -32602, 00:07:27.513 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:27.513 }' 00:07:27.513 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:27.513 { 00:07:27.513 "nqn": "nqn.2016-06.io.spdk:cnode24861", 00:07:27.513 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:27.513 "method": "nvmf_create_subsystem", 00:07:27.513 "req_id": 1 00:07:27.513 } 00:07:27.513 Got JSON-RPC error response 00:07:27.513 response: 00:07:27.513 { 00:07:27.513 "code": -32602, 00:07:27.513 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:27.513 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:27.513 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:27.513 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12037 00:07:27.771 [2024-05-15 04:08:15.583157] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12037: invalid model number 'SPDK_Controller' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:27.771 { 00:07:27.771 "nqn": "nqn.2016-06.io.spdk:cnode12037", 00:07:27.771 "model_number": "SPDK_Controller\u001f", 00:07:27.771 "method": "nvmf_create_subsystem", 00:07:27.771 "req_id": 1 00:07:27.771 } 00:07:27.771 Got JSON-RPC error response 00:07:27.771 response: 00:07:27.771 { 00:07:27.771 "code": -32602, 00:07:27.771 "message": "Invalid MN SPDK_Controller\u001f" 00:07:27.771 }' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:27.771 { 00:07:27.771 "nqn": "nqn.2016-06.io.spdk:cnode12037", 00:07:27.771 "model_number": "SPDK_Controller\u001f", 00:07:27.771 "method": "nvmf_create_subsystem", 00:07:27.771 "req_id": 1 00:07:27.771 } 00:07:27.771 Got JSON-RPC error response 00:07:27.771 response: 00:07:27.771 { 00:07:27.771 "code": -32602, 00:07:27.771 "message": "Invalid MN SPDK_Controller\u001f" 00:07:27.771 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:27.771 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 40 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'YE&Rp+]iw[|hj#H(&Zgl&' 00:07:27.772 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'YE&Rp+]iw[|hj#H(&Zgl&' nqn.2016-06.io.spdk:cnode24152 00:07:28.029 [2024-05-15 04:08:15.908321] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24152: invalid serial number 'YE&Rp+]iw[|hj#H(&Zgl&' 00:07:28.029 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:28.029 { 00:07:28.029 "nqn": "nqn.2016-06.io.spdk:cnode24152", 00:07:28.029 "serial_number": "YE&Rp+]iw[|hj#H(&Zgl&", 00:07:28.029 "method": "nvmf_create_subsystem", 00:07:28.029 "req_id": 1 00:07:28.029 } 00:07:28.029 Got JSON-RPC error response 00:07:28.029 response: 00:07:28.029 { 00:07:28.029 "code": -32602, 
00:07:28.029 "message": "Invalid SN YE&Rp+]iw[|hj#H(&Zgl&" 00:07:28.029 }' 00:07:28.029 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:28.029 { 00:07:28.029 "nqn": "nqn.2016-06.io.spdk:cnode24152", 00:07:28.029 "serial_number": "YE&Rp+]iw[|hj#H(&Zgl&", 00:07:28.029 "method": "nvmf_create_subsystem", 00:07:28.029 "req_id": 1 00:07:28.029 } 00:07:28.029 Got JSON-RPC error response 00:07:28.029 response: 00:07:28.029 { 00:07:28.029 "code": -32602, 00:07:28.029 "message": "Invalid SN YE&Rp+]iw[|hj#H(&Zgl&" 00:07:28.029 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:28.030 04:08:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:28.030 04:08:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:28.030 04:08:16 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.030 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
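(Editor's note: the verbose character-by-character output above and below comes from the gen_random_s helper in target/invalid.sh. A condensed sketch of what that loop appears to do — not the SPDK helper itself, and the handling of a leading '-' is an assumption — would be:

    # Build an N-character string from random printable ASCII codes 32-127,
    # mirroring the printf %x / echo -e '\xNN' steps visible in the log.
    gen_random_s_sketch() {
        local length=$1 string= code
        while (( ${#string} < length )); do
            code=$(( RANDOM % 96 + 32 ))                 # pick a code in 32..127
            string+=$(printf "\\x$(printf %x "$code")")  # decimal -> hex -> character
        done
        [[ $string == -* ]] && string="x${string:1}"     # avoid option-like strings (assumed handling)
        echo "$string"
    }

The resulting string is then passed as a serial or model number to nvmf_create_subsystem, which is expected to reject it with JSON-RPC error -32602, as the responses above show.)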
00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '=$Lc2^q}NO"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\' 00:07:28.288 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d '=$Lc2^q}NO"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\' nqn.2016-06.io.spdk:cnode11953 00:07:28.288 [2024-05-15 04:08:16.301529] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11953: invalid model number '=$Lc2^q}NO"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\' 00:07:28.545 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:28.545 { 00:07:28.545 "nqn": "nqn.2016-06.io.spdk:cnode11953", 00:07:28.545 "model_number": "=$Lc2^q}NO\"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\\", 00:07:28.545 "method": "nvmf_create_subsystem", 00:07:28.545 "req_id": 1 00:07:28.545 } 00:07:28.545 Got JSON-RPC error response 00:07:28.545 response: 00:07:28.545 { 00:07:28.545 "code": -32602, 00:07:28.545 "message": "Invalid MN =$Lc2^q}NO\"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\\" 00:07:28.545 }' 00:07:28.545 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:28.545 { 00:07:28.545 "nqn": "nqn.2016-06.io.spdk:cnode11953", 00:07:28.545 "model_number": "=$Lc2^q}NO\"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\\", 00:07:28.545 "method": "nvmf_create_subsystem", 00:07:28.545 "req_id": 1 00:07:28.545 } 00:07:28.545 Got JSON-RPC error response 00:07:28.545 response: 00:07:28.546 { 00:07:28.546 "code": -32602, 00:07:28.546 "message": "Invalid MN =$Lc2^q}NO\"+/Z>pMV#*BQ3%@6NRZZtZD.d$Am`Z\\" 00:07:28.546 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:28.546 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:28.546 [2024-05-15 04:08:16.554466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.802 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:29.059 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:29.059 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:29.059 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:29.059 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:29.059 04:08:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:29.059 [2024-05-15 04:08:17.048057] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:29.059 [2024-05-15 04:08:17.048142] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:29.059 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:29.059 { 00:07:29.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:29.059 "listen_address": { 00:07:29.059 "trtype": "tcp", 00:07:29.059 "traddr": "", 00:07:29.059 "trsvcid": "4421" 00:07:29.059 }, 00:07:29.059 "method": "nvmf_subsystem_remove_listener", 00:07:29.059 "req_id": 1 00:07:29.059 } 00:07:29.059 Got JSON-RPC error response 00:07:29.059 response: 00:07:29.059 { 00:07:29.059 "code": -32602, 00:07:29.059 "message": "Invalid parameters" 00:07:29.059 }' 00:07:29.059 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:29.059 { 00:07:29.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:29.059 "listen_address": { 00:07:29.059 "trtype": "tcp", 00:07:29.059 "traddr": "", 
00:07:29.059 "trsvcid": "4421" 00:07:29.059 }, 00:07:29.059 "method": "nvmf_subsystem_remove_listener", 00:07:29.059 "req_id": 1 00:07:29.059 } 00:07:29.059 Got JSON-RPC error response 00:07:29.059 response: 00:07:29.059 { 00:07:29.059 "code": -32602, 00:07:29.059 "message": "Invalid parameters" 00:07:29.059 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:29.059 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6216 -i 0 00:07:29.316 [2024-05-15 04:08:17.300882] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6216: invalid cntlid range [0-65519] 00:07:29.316 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:29.316 { 00:07:29.316 "nqn": "nqn.2016-06.io.spdk:cnode6216", 00:07:29.316 "min_cntlid": 0, 00:07:29.316 "method": "nvmf_create_subsystem", 00:07:29.316 "req_id": 1 00:07:29.316 } 00:07:29.316 Got JSON-RPC error response 00:07:29.316 response: 00:07:29.316 { 00:07:29.316 "code": -32602, 00:07:29.316 "message": "Invalid cntlid range [0-65519]" 00:07:29.316 }' 00:07:29.316 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:29.316 { 00:07:29.316 "nqn": "nqn.2016-06.io.spdk:cnode6216", 00:07:29.316 "min_cntlid": 0, 00:07:29.316 "method": "nvmf_create_subsystem", 00:07:29.316 "req_id": 1 00:07:29.316 } 00:07:29.316 Got JSON-RPC error response 00:07:29.316 response: 00:07:29.316 { 00:07:29.316 "code": -32602, 00:07:29.316 "message": "Invalid cntlid range [0-65519]" 00:07:29.316 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:29.316 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24505 -i 65520 00:07:29.574 [2024-05-15 04:08:17.557734] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24505: invalid cntlid range [65520-65519] 00:07:29.574 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:29.574 { 00:07:29.574 "nqn": "nqn.2016-06.io.spdk:cnode24505", 00:07:29.574 "min_cntlid": 65520, 00:07:29.574 "method": "nvmf_create_subsystem", 00:07:29.574 "req_id": 1 00:07:29.574 } 00:07:29.574 Got JSON-RPC error response 00:07:29.574 response: 00:07:29.574 { 00:07:29.574 "code": -32602, 00:07:29.574 "message": "Invalid cntlid range [65520-65519]" 00:07:29.574 }' 00:07:29.574 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:29.574 { 00:07:29.574 "nqn": "nqn.2016-06.io.spdk:cnode24505", 00:07:29.574 "min_cntlid": 65520, 00:07:29.574 "method": "nvmf_create_subsystem", 00:07:29.574 "req_id": 1 00:07:29.574 } 00:07:29.574 Got JSON-RPC error response 00:07:29.574 response: 00:07:29.574 { 00:07:29.574 "code": -32602, 00:07:29.574 "message": "Invalid cntlid range [65520-65519]" 00:07:29.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:29.574 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29399 -I 0 00:07:29.831 [2024-05-15 04:08:17.798599] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29399: invalid cntlid range [1-0] 00:07:29.831 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:29.831 { 00:07:29.831 "nqn": "nqn.2016-06.io.spdk:cnode29399", 00:07:29.831 
"max_cntlid": 0, 00:07:29.831 "method": "nvmf_create_subsystem", 00:07:29.831 "req_id": 1 00:07:29.831 } 00:07:29.831 Got JSON-RPC error response 00:07:29.831 response: 00:07:29.832 { 00:07:29.832 "code": -32602, 00:07:29.832 "message": "Invalid cntlid range [1-0]" 00:07:29.832 }' 00:07:29.832 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:29.832 { 00:07:29.832 "nqn": "nqn.2016-06.io.spdk:cnode29399", 00:07:29.832 "max_cntlid": 0, 00:07:29.832 "method": "nvmf_create_subsystem", 00:07:29.832 "req_id": 1 00:07:29.832 } 00:07:29.832 Got JSON-RPC error response 00:07:29.832 response: 00:07:29.832 { 00:07:29.832 "code": -32602, 00:07:29.832 "message": "Invalid cntlid range [1-0]" 00:07:29.832 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:29.832 04:08:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14305 -I 65520 00:07:30.089 [2024-05-15 04:08:18.039343] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14305: invalid cntlid range [1-65520] 00:07:30.089 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:30.089 { 00:07:30.089 "nqn": "nqn.2016-06.io.spdk:cnode14305", 00:07:30.089 "max_cntlid": 65520, 00:07:30.089 "method": "nvmf_create_subsystem", 00:07:30.089 "req_id": 1 00:07:30.089 } 00:07:30.089 Got JSON-RPC error response 00:07:30.089 response: 00:07:30.089 { 00:07:30.089 "code": -32602, 00:07:30.089 "message": "Invalid cntlid range [1-65520]" 00:07:30.089 }' 00:07:30.089 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:30.089 { 00:07:30.089 "nqn": "nqn.2016-06.io.spdk:cnode14305", 00:07:30.089 "max_cntlid": 65520, 00:07:30.089 "method": "nvmf_create_subsystem", 00:07:30.089 "req_id": 1 00:07:30.089 } 00:07:30.089 Got JSON-RPC error response 00:07:30.089 response: 00:07:30.089 { 00:07:30.089 "code": -32602, 00:07:30.089 "message": "Invalid cntlid range [1-65520]" 00:07:30.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:30.089 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22532 -i 6 -I 5 00:07:30.347 [2024-05-15 04:08:18.276153] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22532: invalid cntlid range [6-5] 00:07:30.347 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:30.347 { 00:07:30.347 "nqn": "nqn.2016-06.io.spdk:cnode22532", 00:07:30.347 "min_cntlid": 6, 00:07:30.347 "max_cntlid": 5, 00:07:30.347 "method": "nvmf_create_subsystem", 00:07:30.347 "req_id": 1 00:07:30.347 } 00:07:30.347 Got JSON-RPC error response 00:07:30.347 response: 00:07:30.347 { 00:07:30.347 "code": -32602, 00:07:30.347 "message": "Invalid cntlid range [6-5]" 00:07:30.347 }' 00:07:30.347 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:30.347 { 00:07:30.347 "nqn": "nqn.2016-06.io.spdk:cnode22532", 00:07:30.347 "min_cntlid": 6, 00:07:30.347 "max_cntlid": 5, 00:07:30.347 "method": "nvmf_create_subsystem", 00:07:30.347 "req_id": 1 00:07:30.347 } 00:07:30.347 Got JSON-RPC error response 00:07:30.347 response: 00:07:30.347 { 00:07:30.347 "code": -32602, 00:07:30.347 "message": "Invalid cntlid range [6-5]" 00:07:30.347 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:30.347 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:30.605 { 00:07:30.605 "name": "foobar", 00:07:30.605 "method": "nvmf_delete_target", 00:07:30.605 "req_id": 1 00:07:30.605 } 00:07:30.605 Got JSON-RPC error response 00:07:30.605 response: 00:07:30.605 { 00:07:30.605 "code": -32602, 00:07:30.605 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:30.605 }' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:30.605 { 00:07:30.605 "name": "foobar", 00:07:30.605 "method": "nvmf_delete_target", 00:07:30.605 "req_id": 1 00:07:30.605 } 00:07:30.605 Got JSON-RPC error response 00:07:30.605 response: 00:07:30.605 { 00:07:30.605 "code": -32602, 00:07:30.605 "message": "The specified target doesn't exist, cannot delete it." 00:07:30.605 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.605 rmmod nvme_tcp 00:07:30.605 rmmod nvme_fabrics 00:07:30.605 rmmod nvme_keyring 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3288359 ']' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3288359 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3288359 ']' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3288359 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3288359 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3288359' 00:07:30.605 killing process with pid 3288359 00:07:30.605 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3288359 00:07:30.605 [2024-05-15 04:08:18.489247] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.605 04:08:18 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3288359 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.863 04:08:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.396 04:08:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.396 00:07:33.396 real 0m9.101s 00:07:33.396 user 0m19.862s 00:07:33.396 sys 0m2.780s 00:07:33.396 04:08:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.396 04:08:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 ************************************ 00:07:33.396 END TEST nvmf_invalid 00:07:33.396 ************************************ 00:07:33.396 04:08:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.396 04:08:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:33.396 04:08:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.396 04:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 ************************************ 00:07:33.396 START TEST nvmf_abort 00:07:33.396 ************************************ 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.396 * Looking for test storage... 
00:07:33.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.396 04:08:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:35.929 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.930 04:08:23 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:35.930 00:07:35.930 --- 10.0.0.2 ping statistics --- 00:07:35.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.930 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:07:35.930 00:07:35.930 --- 10.0.0.1 ping statistics --- 00:07:35.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.930 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3291292 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3291292 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3291292 ']' 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.930 04:08:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.930 [2024-05-15 04:08:23.630959] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:07:35.930 [2024-05-15 04:08:23.631039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.930 [2024-05-15 04:08:23.708055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.930 [2024-05-15 04:08:23.822289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.930 [2024-05-15 04:08:23.822338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
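The nvmf_tcp_init sequence traced above turns the two ports of the E810 NIC (surfaced as cvl_0_0 and cvl_0_1) into a self-contained initiator/target pair: the target port is moved into its own network namespace and given 10.0.0.2/24, the initiator port keeps 10.0.0.1/24 in the default namespace, TCP port 4420 is opened in iptables, and both directions are ping-verified before the target application is started inside the namespace. Condensed into a plain sketch (the interface names and addresses are whatever this node's harness picked, and the commands are lifted from the trace rather than from common.sh itself):

  # sketch of nvmf_tcp_init as traced above; cvl_0_0 = target port, cvl_0_1 = initiator port
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                              # target port now only visible inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator

From here on NVMF_APP is prefixed with the namespace command, so nvmf_tgt itself runs as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE', and waitforlisten blocks until it is accepting RPCs on /var/tmp/spdk.sock.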
00:07:35.930 [2024-05-15 04:08:23.822366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.930 [2024-05-15 04:08:23.822378] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.930 [2024-05-15 04:08:23.822388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.930 [2024-05-15 04:08:23.822447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.930 [2024-05-15 04:08:23.822510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.930 [2024-05-15 04:08:23.822512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 [2024-05-15 04:08:24.663843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 Malloc0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 Delay0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 [2024-05-15 04:08:24.734862] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:36.864 [2024-05-15 04:08:24.735192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.864 04:08:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:36.864 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.864 [2024-05-15 04:08:24.842624] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:39.395 Initializing NVMe Controllers 00:07:39.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:39.395 controller IO queue size 128 less than required 00:07:39.395 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:39.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:39.395 Initialization complete. Launching workers. 
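For the abort test itself, target/abort.sh assembles the target entirely over RPC and then points a deliberately over-subscribed abort workload at the listener it just created. Reconstructed from the rpc_cmd calls traced above (rpc_cmd is the harness wrapper around scripts/rpc.py, and the paths are shortened here):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0                   # RAM-backed bdev, 4096-byte blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
          -r 1000000 -t 1000000 -w 1000000 -n 1000000             # wrap it in a delay bdev (values in microseconds)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -c 0x1 -t 1 -l warning -q 128                           # queue depth 128 against the delayed namespace

Exposing the namespace through Delay0 rather than Malloc0 directly is what gives the abort tool something to cancel: with the large injected latency almost every command is still outstanding when its abort arrives. In the summary printed just below, the 31920 aborts submitted plus the 62 that could not be submitted add up exactly to the 31982 I/Os issued (123 completed + 31859 failed), so an abort was attempted for every command, and 31863 of the submitted aborts succeeded.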
00:07:39.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31859 00:07:39.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31920, failed to submit 62 00:07:39.395 success 31863, unsuccess 57, failed 0 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.395 rmmod nvme_tcp 00:07:39.395 rmmod nvme_fabrics 00:07:39.395 rmmod nvme_keyring 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3291292 ']' 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3291292 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3291292 ']' 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3291292 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3291292 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3291292' 00:07:39.395 killing process with pid 3291292 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3291292 00:07:39.395 [2024-05-15 04:08:27.146093] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:39.395 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3291292 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.653 
04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.653 04:08:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.561 04:08:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.561 00:07:41.561 real 0m8.641s 00:07:41.561 user 0m13.403s 00:07:41.561 sys 0m3.102s 00:07:41.561 04:08:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.561 04:08:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.561 ************************************ 00:07:41.561 END TEST nvmf_abort 00:07:41.561 ************************************ 00:07:41.561 04:08:29 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:41.561 04:08:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:41.561 04:08:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.561 04:08:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.561 ************************************ 00:07:41.561 START TEST nvmf_ns_hotplug_stress 00:07:41.561 ************************************ 00:07:41.561 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:41.820 * Looking for test storage... 00:07:41.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.820 
04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.820 
04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.820 04:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.352 04:08:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:44.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:44.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.352 
04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:44.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:44.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.352 
04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:44.352 00:07:44.352 --- 10.0.0.2 ping statistics --- 00:07:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.352 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:07:44.352 00:07:44.352 --- 10.0.0.1 ping statistics --- 00:07:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.352 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.352 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3293973 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3293973 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3293973 ']' 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.353 04:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.353 [2024-05-15 04:08:32.344736] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:07:44.353 [2024-05-15 04:08:32.344815] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.611 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.611 [2024-05-15 04:08:32.426925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.611 [2024-05-15 04:08:32.546838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:44.611 [2024-05-15 04:08:32.546909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.611 [2024-05-15 04:08:32.546925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.611 [2024-05-15 04:08:32.546948] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.611 [2024-05-15 04:08:32.546960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.611 [2024-05-15 04:08:32.547045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.611 [2024-05-15 04:08:32.547111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.611 [2024-05-15 04:08:32.547115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:45.544 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.803 [2024-05-15 04:08:33.610901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.803 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:46.062 04:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.321 [2024-05-15 04:08:34.149621] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:46.321 [2024-05-15 04:08:34.149860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.321 04:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.614 04:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:46.872 Malloc0 00:07:46.872 04:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.130 Delay0 00:07:47.130 04:08:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.387 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:47.387 NULL1 00:07:47.645 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:47.645 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3294398 00:07:47.645 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:47.645 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:47.645 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.903 Read completed with error (sct=0, sc=11) 00:07:47.903 04:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.159 04:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:48.159 04:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:48.416 true 00:07:48.416 04:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:48.416 04:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.348 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.606 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:49.606 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:49.606 true 00:07:49.863 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:49.863 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
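At this point the target for the hotplug test is fully assembled: subsystem nqn.2016-06.io.spdk:cnode1 was created with -m 10 (setting the namespace limit), Delay0 was added first and so sits at namespace 1, the null bdev NULL1 (bdev_null_create NULL1 1000 512) was attached after it, and spdk_nvme_perf has been launched against the subsystem as PID 3294398. Everything from here to the end of the run is the same loop repeating for as long as that perf process stays alive. A sketch of the loop as it appears in the trace (not the script verbatim; the rpc.py path is shortened):

  null_size=1000
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000 &                   # 30 s of QD128 512-byte random reads
  PERF_PID=$!
  while kill -0 "$PERF_PID"; do                                       # loop while the perf process is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under I/O
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                      # grow the still-attached NULL1 namespace
  done

The bursts of 'Read completed with error (sct=0, sc=11)' and 'Message suppressed 999 times' are the host-side perf tool reporting reads that landed while namespace 1 was detached, which is exactly the condition this test wants to survive; the bare 'true' after each bdev_null_resize appears to be the JSON result that rpc.py prints for that call.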
00:07:49.863 04:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.121 04:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:50.121 04:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:50.379 true 00:07:50.379 04:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:50.379 04:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.314 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.572 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:51.572 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:51.833 true 00:07:51.833 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:51.833 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.091 04:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.349 04:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:52.349 04:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:52.607 true 00:07:52.607 04:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:52.607 04:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.540 04:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.798 04:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:53.798 04:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:53.798 true 00:07:54.056 04:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:54.056 04:08:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.056 04:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.314 04:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:54.314 04:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:54.572 true 00:07:54.572 04:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:54.572 04:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.505 04:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.763 04:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:55.763 04:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:56.020 true 00:07:56.020 04:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:56.020 04:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.278 04:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.536 04:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:56.536 04:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:56.793 true 00:07:56.793 04:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:56.793 04:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.726 04:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.984 04:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:57.984 04:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:07:58.241 true 00:07:58.241 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:58.241 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.499 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.757 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:58.757 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:59.013 true 00:07:59.013 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:07:59.013 04:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.949 04:08:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.207 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:00.207 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:00.464 true 00:08:00.464 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:00.464 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.721 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.979 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:00.979 04:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:01.236 true 00:08:01.236 04:08:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:01.236 04:08:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.166 04:08:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.423 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:02.423 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:02.680 true 00:08:02.680 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:02.680 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.937 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.193 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:03.193 04:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:03.193 true 00:08:03.193 04:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:03.193 04:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.380 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.637 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:04.638 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:04.895 true 00:08:04.895 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:04.895 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.153 04:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.412 04:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:05.412 04:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:05.668 true 00:08:05.668 04:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:05.668 04:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.598 04:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.598 04:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:06.598 04:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:07.162 true 00:08:07.162 04:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:07.162 04:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.162 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.419 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:07.419 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:07.676 true 00:08:07.676 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:07.676 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.931 04:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.188 04:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:08.188 04:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:08.448 true 00:08:08.448 04:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:08.448 04:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.386 04:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.899 04:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:09.899 04:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:09.899 true 00:08:09.899 04:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:10.156 04:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.156 04:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.414 04:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:08:10.414 04:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:10.671 true 00:08:10.671 04:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:10.671 04:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.603 04:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.861 04:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:11.861 04:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:12.119 true 00:08:12.119 04:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:12.119 04:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.377 04:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.635 04:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:12.635 04:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:12.905 true 00:08:12.905 04:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:12.905 04:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.865 04:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.122 04:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:14.122 04:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:14.379 true 00:08:14.380 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:14.380 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.637 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.895 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1026 00:08:14.895 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:15.153 true 00:08:15.153 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:15.153 04:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.087 04:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.344 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:16.344 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:16.602 true 00:08:16.602 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:16.602 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.859 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.116 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:17.116 04:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:17.116 true 00:08:17.373 04:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:17.373 04:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.304 Initializing NVMe Controllers 00:08:18.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:18.304 Controller IO queue size 128, less than required. 00:08:18.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:18.304 Controller IO queue size 128, less than required. 00:08:18.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:18.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:18.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:18.304 Initialization complete. Launching workers. 
00:08:18.304 ======================================================== 00:08:18.305 Latency(us) 00:08:18.305 Device Information : IOPS MiB/s Average min max 00:08:18.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 925.07 0.45 72727.82 2024.90 1036673.48 00:08:18.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11037.93 5.39 11597.04 2586.43 368596.50 00:08:18.305 ======================================================== 00:08:18.305 Total : 11963.00 5.84 16324.12 2024.90 1036673.48 00:08:18.305 00:08:18.305 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.305 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:18.305 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:18.562 true 00:08:18.562 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3294398 00:08:18.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3294398) - No such process 00:08:18.562 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3294398 00:08:18.562 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.819 04:09:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.077 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:19.077 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:19.077 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:19.077 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.077 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:19.335 null0 00:08:19.335 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.335 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.335 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:19.592 null1 00:08:19.592 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.592 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.592 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:19.850 null2 00:08:19.850 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.850 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:08:19.850 04:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:20.108 null3 00:08:20.108 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.108 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.108 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:20.366 null4 00:08:20.366 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.366 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.366 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:20.623 null5 00:08:20.624 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.624 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.624 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:20.881 null6 00:08:20.881 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.881 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.881 04:09:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:21.140 null7 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
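For orientation, the RPC churn traced above is phase one of ns_hotplug_stress.sh: namespace 1 is repeatedly hot-removed and re-added on nqn.2016-06.io.spdk:cnode1 while the backing NULL1 bdev is resized one step larger per pass, until the background I/O generator (PID 3294398 in this run) exits and the script falls through to the "No such process" / wait seen at 00:08:18. A minimal sketch of that loop, reconstructed only from the @44-@55 trace markers shown in the log (rpc_py and PERF_PID are shorthand introduced here; the real script's control flow may differ in details):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for the rpc.py path in the trace
    PERF_PID=3294398       # placeholder: PID of the background I/O generator in this run
    null_size=1000
    while kill -0 "$PERF_PID"; do                                    # @44: loop while the I/O generator is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0         # @46: hot-add it back
        null_size=$((null_size + 1))                                            # @49: bump the target size each pass
        $rpc_py bdev_null_resize NULL1 "$null_size"                             # @50: resize NULL1 under load
    done
    wait "$PERF_PID"                                                  # @53: reap the generator once it exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @54: final cleanup of both namespaces
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2     # @55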
00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
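A reading aid for the performance summary printed at 00:08:18 above: the Total row is the IOPS-weighted mean of the two per-namespace rows (NSID 1, the namespace being hot-plugged in the loop sketched earlier, contributes few but much slower I/Os than NSID 2). A quick check, not part of the test:

    awk 'BEGIN {
        iops1 = 925.07;   avg1 = 72727.82    # NSID 1 from core 0
        iops2 = 11037.93; avg2 = 11597.04    # NSID 2 from core 0
        total = iops1 + iops2
        printf "Total: %.2f IOPS, %.2f us average\n", total, (iops1*avg1 + iops2*avg2)/total
    }'
    # prints 11963.00 IOPS and ~16324.14 us, matching the reported Total (16324.12) up to per-row rounding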
00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
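The trace from 00:08:19 onward is phase two: eight null bdevs (null0..null7) are created, and eight background workers then race to add and remove one namespace each against nqn.2016-06.io.spdk:cnode1, which is the hot-plug storm visible in the interleaved add_ns/remove_ns calls that follow. A sketch reconstructed from the @14-@18 and @58-@66 trace markers (again, rpc_py is shorthand and the exact expressions are inferred from the logged values):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand

    add_remove() {                                                # @14-@18: one worker's add/remove loop
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    nthreads=8                                                    # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                          # @59-@60: create null0..null7, one per worker
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                          # @62-@64: run the workers concurrently
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                             # @66: reap all eight worker PIDs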
00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3299086 3299088 3299091 3299095 3299098 3299101 3299105 3299108 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.140 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.399 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.399 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.399 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.657 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.657 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.657 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.657 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.658 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.658 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.658 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.658 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.916 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.174 04:09:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.433 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.692 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.950 04:09:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.208 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.466 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.467 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.467 
04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.726 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.984 04:09:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.243 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.501 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.760 
04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.760 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.018 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.019 04:09:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.304 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.581 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.841 
04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.841 04:09:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.100 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.358 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
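The ns_hotplug_stress.sh@16-@18 entries above are the hotplug churn itself: namespace IDs 1-8, backed by the null bdevs null0-null7, are attached to nqn.2016-06.io.spdk:cnode1 and detached again while a per-worker counter runs up to 10; the shuffled nsid order within each timestamp batch suggests the workers run concurrently. A minimal sketch that generates the same RPC pattern against a running target (an illustrative reconstruction, not the verbatim test script; the add_remove helper name and the &/wait structure are assumptions):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  add_remove() {                         # illustrative worker, one per namespace
      local nsid=$1 bdev=$2 i=0
      while ((i < 10)); do               # mirrors ns_hotplug_stress.sh@16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
          ((++i))
      done
  }
  for n in {1..8}; do                    # null0..null7 back namespace IDs 1..8
      add_remove "$n" "null$((n - 1))" &
  done
  wait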
00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.616 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.875 rmmod nvme_tcp 00:08:26.875 rmmod nvme_fabrics 00:08:26.875 rmmod nvme_keyring 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3293973 ']' 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3293973 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3293973 ']' 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3293973 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3293973 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3293973' 00:08:26.875 killing process with pid 3293973 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3293973 00:08:26.875 [2024-05-15 04:09:14.722872] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:26.875 04:09:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3293973 00:08:27.135 04:09:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.135 04:09:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.044 04:09:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.044 00:08:29.044 real 0m47.498s 00:08:29.044 user 3m33.280s 00:08:29.044 sys 0m16.754s 00:08:29.044 04:09:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.044 04:09:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.044 ************************************ 00:08:29.044 END TEST nvmf_ns_hotplug_stress 00:08:29.044 ************************************ 00:08:29.304 04:09:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:29.304 04:09:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:29.304 04:09:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.304 04:09:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.304 ************************************ 00:08:29.304 START TEST nvmf_connect_stress 00:08:29.304 ************************************ 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:29.304 * Looking for test storage... 
00:08:29.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.304 04:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.836 04:09:19 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.836 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:31.837 00:08:31.837 --- 10.0.0.2 ping statistics --- 00:08:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.837 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:31.837 00:08:31.837 --- 10.0.0.1 ping statistics --- 00:08:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.837 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3302225 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3302225 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3302225 ']' 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:31.837 04:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.837 [2024-05-15 04:09:19.705345] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
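At this point nvmftestinit has finished the physical setup for the connect_stress run: the e810 ports (0x8086:0x159b) were resolved to cvl_0_0/cvl_0_1 through /sys/bus/pci/devices/*/net, the target-side NIC was moved into its own network namespace, and nvmf_tgt (pid 3302225) was started inside that namespace with core mask 0xE, binary 1110, which is why three reactors come up on cores 1-3 below. Condensed from the ip/iptables/app-start calls visible above, the topology looks like this (a summary sketch, not a substitute for nvmf/common.sh):

  # target NIC cvl_0_0 lives in its own namespace; initiator NIC cvl_0_1 stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
  # the target application then runs entirely inside the namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &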
00:08:31.837 [2024-05-15 04:09:19.705426] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.837 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.837 [2024-05-15 04:09:19.781526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.094 [2024-05-15 04:09:19.898557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.094 [2024-05-15 04:09:19.898613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.094 [2024-05-15 04:09:19.898629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.095 [2024-05-15 04:09:19.898642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.095 [2024-05-15 04:09:19.898653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.095 [2024-05-15 04:09:19.898715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.095 [2024-05-15 04:09:19.898832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.095 [2024-05-15 04:09:19.898835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.658 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:32.659 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:08:32.659 04:09:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.659 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.659 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 [2024-05-15 04:09:20.680030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 [2024-05-15 04:09:20.697435] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:32.916 [2024-05-15 04:09:20.710074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 NULL1 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3302382 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 
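With the target listening for RPCs, connect_stress.sh@15-@21 builds the device under test and kicks off the client: a TCP transport is created with the harness's '-t tcp -o' options plus '-u 8192', subsystem nqn.2016-06.io.spdk:cnode1 is created with serial SPDK00000000000001, any-host access and at most 10 namespaces, a listener goes on 10.0.0.2:4420, a 1000 MB null bdev NULL1 with 512-byte blocks is created, and the connect_stress binary (pid 3302382 in this run) is pointed at that listener for a 10-second soak while the @27/@28 loop queues twenty entries into rpc.txt. Collapsed into the equivalent commands (paths shortened for readability; in the harness these go through rpc_cmd):

  rpc=./scripts/rpc.py                                                   # the log uses the full workspace path
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                         # @15
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10     # @16
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # @17
  "$rpc" bdev_null_create NULL1 1000 512                                 # @18: 1000 MB, 512 B blocks
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &                                                            # @20/@21
  PERF_PID=$!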
04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.916 04:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.173 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.173 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:33.173 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.173 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.173 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.430 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.430 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:33.430 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.430 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.430 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.992 04:09:21 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.992 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:33.992 04:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.992 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.992 04:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.249 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.249 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:34.249 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.249 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.249 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.505 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:34.505 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.505 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.505 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.762 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.762 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:34.762 04:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.762 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.762 04:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.019 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.019 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:35.019 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.019 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.019 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.587 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.587 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:35.587 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.587 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.587 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.848 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.848 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:35.848 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.848 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.848 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.106 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:36.106 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:36.106 04:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.106 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.106 04:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.364 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.364 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:36.364 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.364 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.364 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.621 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.621 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:36.621 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.621 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.621 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.186 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:37.186 04:09:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.186 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.186 04:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.444 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.444 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:37.444 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.444 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.444 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.702 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.702 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:37.702 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.702 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.702 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.960 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.960 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:37.960 04:09:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.960 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.960 04:09:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.218 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.218 04:09:26 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:38.218 04:09:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.218 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.218 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.784 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.784 04:09:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:38.784 04:09:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.784 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.784 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.042 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.042 04:09:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:39.042 04:09:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.042 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.042 04:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.300 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.300 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:39.300 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.300 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.300 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.557 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.557 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:39.557 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.557 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.557 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.845 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.845 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:39.845 04:09:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.845 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.845 04:09:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.411 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:40.411 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.411 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.411 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.669 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.669 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3302382 00:08:40.669 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.669 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.669 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.927 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.927 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:40.927 04:09:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.927 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.927 04:09:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.185 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.185 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:41.185 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.185 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.185 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.443 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.443 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:41.443 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.443 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.443 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.009 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.009 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:42.009 04:09:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.009 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.009 04:09:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.267 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.267 04:09:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:42.267 04:09:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.267 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.267 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.524 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.524 04:09:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:42.524 04:09:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.524 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.524 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.781 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.781 04:09:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:42.781 04:09:30 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.781 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.781 04:09:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.038 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3302382 00:08:43.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3302382) - No such process 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3302382 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.038 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.038 rmmod nvme_tcp 00:08:43.296 rmmod nvme_fabrics 00:08:43.296 rmmod nvme_keyring 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3302225 ']' 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3302225 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3302225 ']' 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3302225 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3302225 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3302225' 00:08:43.296 killing process with pid 3302225 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3302225 00:08:43.296 [2024-05-15 04:09:31.128517] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:08:43.296 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3302225 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.555 04:09:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.460 04:09:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.460 00:08:45.460 real 0m16.347s 00:08:45.460 user 0m40.161s 00:08:45.460 sys 0m6.441s 00:08:45.460 04:09:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.460 04:09:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.460 ************************************ 00:08:45.460 END TEST nvmf_connect_stress 00:08:45.460 ************************************ 00:08:45.719 04:09:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.719 04:09:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:45.719 04:09:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.719 04:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.719 ************************************ 00:08:45.719 START TEST nvmf_fused_ordering 00:08:45.719 ************************************ 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.719 * Looking for test storage... 
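The connect_stress polling logged above follows a standard shell pattern: the test starts a background stressor, remembers its PID (3302382 in this run), and keeps alternating `kill -0 $pid` (a liveness probe that sends no signal) with an rpc_cmd call until kill reports "No such process"; it then reaps the child with wait, removes the scratch rpc.txt file, clears the trap, and calls nvmftestfini. A minimal sketch of that pattern and of the teardown steps visible in the log is below; the stressor command, PID variable, and the choice of RPC are illustrative placeholders, not the literal contents of connect_stress.sh.

    # poll-then-reap loop (names are placeholders, not the script's real ones)
    some_background_stressor &          # hypothetical; connect_stress launches its own helper
    STRESS_PID=$!
    while kill -0 "$STRESS_PID" 2>/dev/null; do
        rpc.py nvmf_get_subsystems > rpc.txt   # any cheap RPC keeps the target busy between checks
    done
    wait "$STRESS_PID" || true          # "No such process" just means it already exited
    rm -f rpc.txt
    trap - SIGINT SIGTERM EXIT

    # nvmftestfini, as logged: unload initiator modules, stop the target, flush the link
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$NVMF_TGT_PID"                # pid 3302225 in this run
    _remove_spdk_ns                     # assumed to delete the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1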
00:08:45.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.719 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.720 04:09:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.269 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.270 04:09:36 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:08:48.270 00:08:48.270 --- 10.0.0.2 ping statistics --- 00:08:48.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.270 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:48.270 00:08:48.270 --- 10.0.0.1 ping statistics --- 00:08:48.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.270 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3305824 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3305824 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3305824 ']' 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.270 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.270 [2024-05-15 04:09:36.229753] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
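The nvmf_tcp_init steps recorded above set up the point-to-point topology used for the rest of the run: the two e810 ports are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and given the target address 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping before the target application is started inside the namespace. A condensed transcript of those commands, with the script's variable indirection and error handling omitted, looks like this:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application itself is then launched with the `ip netns exec cvl_0_0_ns_spdk` prefix (NVMF_TARGET_NS_CMD), which is why the nvmf_tgt start line below carries it.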
00:08:48.270 [2024-05-15 04:09:36.229823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.270 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.529 [2024-05-15 04:09:36.305388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.529 [2024-05-15 04:09:36.417082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.529 [2024-05-15 04:09:36.417139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.529 [2024-05-15 04:09:36.417167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.529 [2024-05-15 04:09:36.417179] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.529 [2024-05-15 04:09:36.417189] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.529 [2024-05-15 04:09:36.417242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.529 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:48.529 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:08:48.529 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.529 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.529 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 [2024-05-15 04:09:36.567591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 [2024-05-15 04:09:36.583533] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:48.787 [2024-05-15 04:09:36.583790] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 NULL1 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.787 04:09:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:48.787 [2024-05-15 04:09:36.628553] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
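With nvmf_tgt (pid 3305824) up inside the namespace and listening for RPCs on /var/tmp/spdk.sock, fused_ordering.sh configures the target and then runs the initiator-side tool against it. The sketch below reconstructs that sequence from the rpc_cmd lines above; showing scripts/rpc.py explicitly is an assumption about what the rpc_cmd wrapper expands to, while the argument values are taken verbatim from the log.

    # target configuration (target/fused_ordering.sh lines 15-20, as logged)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # initiator side: exercise fused command ordering against the new namespace
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) counters that follow appear to be the tool's per-iteration progress output; the run is still in flight at the point where this excerpt of the log ends.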
00:08:48.787 [2024-05-15 04:09:36.628596] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3305963 ] 00:08:48.788 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.721 Attached to nqn.2016-06.io.spdk:cnode1 00:08:49.721 Namespace ID: 1 size: 1GB 00:08:49.721 fused_ordering(0) 00:08:49.721 fused_ordering(1) 00:08:49.721 fused_ordering(2) 00:08:49.721 fused_ordering(3) 00:08:49.721 fused_ordering(4) 00:08:49.721 fused_ordering(5) 00:08:49.721 fused_ordering(6) 00:08:49.721 fused_ordering(7) 00:08:49.721 fused_ordering(8) 00:08:49.721 fused_ordering(9) 00:08:49.721 fused_ordering(10) 00:08:49.721 fused_ordering(11) 00:08:49.721 fused_ordering(12) 00:08:49.721 fused_ordering(13) 00:08:49.721 fused_ordering(14) 00:08:49.721 fused_ordering(15) 00:08:49.721 fused_ordering(16) 00:08:49.721 fused_ordering(17) 00:08:49.721 fused_ordering(18) 00:08:49.721 fused_ordering(19) 00:08:49.721 fused_ordering(20) 00:08:49.721 fused_ordering(21) 00:08:49.721 fused_ordering(22) 00:08:49.721 fused_ordering(23) 00:08:49.721 fused_ordering(24) 00:08:49.721 fused_ordering(25) 00:08:49.721 fused_ordering(26) 00:08:49.721 fused_ordering(27) 00:08:49.721 fused_ordering(28) 00:08:49.721 fused_ordering(29) 00:08:49.721 fused_ordering(30) 00:08:49.721 fused_ordering(31) 00:08:49.721 fused_ordering(32) 00:08:49.721 fused_ordering(33) 00:08:49.721 fused_ordering(34) 00:08:49.721 fused_ordering(35) 00:08:49.721 fused_ordering(36) 00:08:49.721 fused_ordering(37) 00:08:49.721 fused_ordering(38) 00:08:49.721 fused_ordering(39) 00:08:49.721 fused_ordering(40) 00:08:49.721 fused_ordering(41) 00:08:49.721 fused_ordering(42) 00:08:49.721 fused_ordering(43) 00:08:49.721 fused_ordering(44) 00:08:49.721 fused_ordering(45) 00:08:49.721 fused_ordering(46) 00:08:49.721 fused_ordering(47) 00:08:49.721 fused_ordering(48) 00:08:49.721 fused_ordering(49) 00:08:49.721 fused_ordering(50) 00:08:49.721 fused_ordering(51) 00:08:49.721 fused_ordering(52) 00:08:49.721 fused_ordering(53) 00:08:49.721 fused_ordering(54) 00:08:49.721 fused_ordering(55) 00:08:49.721 fused_ordering(56) 00:08:49.721 fused_ordering(57) 00:08:49.721 fused_ordering(58) 00:08:49.721 fused_ordering(59) 00:08:49.721 fused_ordering(60) 00:08:49.721 fused_ordering(61) 00:08:49.721 fused_ordering(62) 00:08:49.721 fused_ordering(63) 00:08:49.721 fused_ordering(64) 00:08:49.721 fused_ordering(65) 00:08:49.721 fused_ordering(66) 00:08:49.721 fused_ordering(67) 00:08:49.721 fused_ordering(68) 00:08:49.721 fused_ordering(69) 00:08:49.721 fused_ordering(70) 00:08:49.721 fused_ordering(71) 00:08:49.721 fused_ordering(72) 00:08:49.721 fused_ordering(73) 00:08:49.721 fused_ordering(74) 00:08:49.721 fused_ordering(75) 00:08:49.721 fused_ordering(76) 00:08:49.721 fused_ordering(77) 00:08:49.721 fused_ordering(78) 00:08:49.721 fused_ordering(79) 00:08:49.721 fused_ordering(80) 00:08:49.721 fused_ordering(81) 00:08:49.721 fused_ordering(82) 00:08:49.721 fused_ordering(83) 00:08:49.721 fused_ordering(84) 00:08:49.721 fused_ordering(85) 00:08:49.721 fused_ordering(86) 00:08:49.721 fused_ordering(87) 00:08:49.721 fused_ordering(88) 00:08:49.721 fused_ordering(89) 00:08:49.721 fused_ordering(90) 00:08:49.721 fused_ordering(91) 00:08:49.721 fused_ordering(92) 00:08:49.721 fused_ordering(93) 00:08:49.721 fused_ordering(94) 00:08:49.721 fused_ordering(95) 00:08:49.721 fused_ordering(96) 00:08:49.721 
fused_ordering(97) 00:08:49.721 fused_ordering(98) 00:08:49.721 fused_ordering(99) 00:08:49.721 fused_ordering(100) 00:08:49.721 fused_ordering(101) 00:08:49.721 fused_ordering(102) 00:08:49.721 fused_ordering(103) 00:08:49.721 fused_ordering(104) 00:08:49.721 fused_ordering(105) 00:08:49.721 fused_ordering(106) 00:08:49.721 fused_ordering(107) 00:08:49.721 fused_ordering(108) 00:08:49.721 fused_ordering(109) 00:08:49.721 fused_ordering(110) 00:08:49.721 fused_ordering(111) 00:08:49.721 fused_ordering(112) 00:08:49.721 fused_ordering(113) 00:08:49.721 fused_ordering(114) 00:08:49.721 fused_ordering(115) 00:08:49.721 fused_ordering(116) 00:08:49.721 fused_ordering(117) 00:08:49.721 fused_ordering(118) 00:08:49.721 fused_ordering(119) 00:08:49.721 fused_ordering(120) 00:08:49.721 fused_ordering(121) 00:08:49.721 fused_ordering(122) 00:08:49.721 fused_ordering(123) 00:08:49.721 fused_ordering(124) 00:08:49.721 fused_ordering(125) 00:08:49.721 fused_ordering(126) 00:08:49.721 fused_ordering(127) 00:08:49.721 fused_ordering(128) 00:08:49.721 fused_ordering(129) 00:08:49.721 fused_ordering(130) 00:08:49.721 fused_ordering(131) 00:08:49.721 fused_ordering(132) 00:08:49.721 fused_ordering(133) 00:08:49.721 fused_ordering(134) 00:08:49.721 fused_ordering(135) 00:08:49.721 fused_ordering(136) 00:08:49.721 fused_ordering(137) 00:08:49.721 fused_ordering(138) 00:08:49.721 fused_ordering(139) 00:08:49.721 fused_ordering(140) 00:08:49.721 fused_ordering(141) 00:08:49.721 fused_ordering(142) 00:08:49.722 fused_ordering(143) 00:08:49.722 fused_ordering(144) 00:08:49.722 fused_ordering(145) 00:08:49.722 fused_ordering(146) 00:08:49.722 fused_ordering(147) 00:08:49.722 fused_ordering(148) 00:08:49.722 fused_ordering(149) 00:08:49.722 fused_ordering(150) 00:08:49.722 fused_ordering(151) 00:08:49.722 fused_ordering(152) 00:08:49.722 fused_ordering(153) 00:08:49.722 fused_ordering(154) 00:08:49.722 fused_ordering(155) 00:08:49.722 fused_ordering(156) 00:08:49.722 fused_ordering(157) 00:08:49.722 fused_ordering(158) 00:08:49.722 fused_ordering(159) 00:08:49.722 fused_ordering(160) 00:08:49.722 fused_ordering(161) 00:08:49.722 fused_ordering(162) 00:08:49.722 fused_ordering(163) 00:08:49.722 fused_ordering(164) 00:08:49.722 fused_ordering(165) 00:08:49.722 fused_ordering(166) 00:08:49.722 fused_ordering(167) 00:08:49.722 fused_ordering(168) 00:08:49.722 fused_ordering(169) 00:08:49.722 fused_ordering(170) 00:08:49.722 fused_ordering(171) 00:08:49.722 fused_ordering(172) 00:08:49.722 fused_ordering(173) 00:08:49.722 fused_ordering(174) 00:08:49.722 fused_ordering(175) 00:08:49.722 fused_ordering(176) 00:08:49.722 fused_ordering(177) 00:08:49.722 fused_ordering(178) 00:08:49.722 fused_ordering(179) 00:08:49.722 fused_ordering(180) 00:08:49.722 fused_ordering(181) 00:08:49.722 fused_ordering(182) 00:08:49.722 fused_ordering(183) 00:08:49.722 fused_ordering(184) 00:08:49.722 fused_ordering(185) 00:08:49.722 fused_ordering(186) 00:08:49.722 fused_ordering(187) 00:08:49.722 fused_ordering(188) 00:08:49.722 fused_ordering(189) 00:08:49.722 fused_ordering(190) 00:08:49.722 fused_ordering(191) 00:08:49.722 fused_ordering(192) 00:08:49.722 fused_ordering(193) 00:08:49.722 fused_ordering(194) 00:08:49.722 fused_ordering(195) 00:08:49.722 fused_ordering(196) 00:08:49.722 fused_ordering(197) 00:08:49.722 fused_ordering(198) 00:08:49.722 fused_ordering(199) 00:08:49.722 fused_ordering(200) 00:08:49.722 fused_ordering(201) 00:08:49.722 fused_ordering(202) 00:08:49.722 fused_ordering(203) 00:08:49.722 fused_ordering(204) 
00:08:49.722 fused_ordering(205) 00:08:50.305 fused_ordering(206) 00:08:50.305 fused_ordering(207) 00:08:50.305 fused_ordering(208) 00:08:50.305 fused_ordering(209) 00:08:50.305 fused_ordering(210) 00:08:50.305 fused_ordering(211) 00:08:50.306 fused_ordering(212) 00:08:50.306 fused_ordering(213) 00:08:50.306 fused_ordering(214) 00:08:50.306 fused_ordering(215) 00:08:50.306 fused_ordering(216) 00:08:50.306 fused_ordering(217) 00:08:50.306 fused_ordering(218) 00:08:50.306 fused_ordering(219) 00:08:50.306 fused_ordering(220) 00:08:50.306 fused_ordering(221) 00:08:50.306 fused_ordering(222) 00:08:50.306 fused_ordering(223) 00:08:50.306 fused_ordering(224) 00:08:50.306 fused_ordering(225) 00:08:50.306 fused_ordering(226) 00:08:50.306 fused_ordering(227) 00:08:50.306 fused_ordering(228) 00:08:50.306 fused_ordering(229) 00:08:50.306 fused_ordering(230) 00:08:50.306 fused_ordering(231) 00:08:50.306 fused_ordering(232) 00:08:50.306 fused_ordering(233) 00:08:50.306 fused_ordering(234) 00:08:50.306 fused_ordering(235) 00:08:50.306 fused_ordering(236) 00:08:50.306 fused_ordering(237) 00:08:50.306 fused_ordering(238) 00:08:50.306 fused_ordering(239) 00:08:50.306 fused_ordering(240) 00:08:50.306 fused_ordering(241) 00:08:50.306 fused_ordering(242) 00:08:50.306 fused_ordering(243) 00:08:50.306 fused_ordering(244) 00:08:50.306 fused_ordering(245) 00:08:50.306 fused_ordering(246) 00:08:50.306 fused_ordering(247) 00:08:50.306 fused_ordering(248) 00:08:50.306 fused_ordering(249) 00:08:50.306 fused_ordering(250) 00:08:50.306 fused_ordering(251) 00:08:50.306 fused_ordering(252) 00:08:50.306 fused_ordering(253) 00:08:50.306 fused_ordering(254) 00:08:50.306 fused_ordering(255) 00:08:50.306 fused_ordering(256) 00:08:50.306 fused_ordering(257) 00:08:50.306 fused_ordering(258) 00:08:50.306 fused_ordering(259) 00:08:50.306 fused_ordering(260) 00:08:50.306 fused_ordering(261) 00:08:50.306 fused_ordering(262) 00:08:50.306 fused_ordering(263) 00:08:50.306 fused_ordering(264) 00:08:50.306 fused_ordering(265) 00:08:50.306 fused_ordering(266) 00:08:50.306 fused_ordering(267) 00:08:50.306 fused_ordering(268) 00:08:50.306 fused_ordering(269) 00:08:50.306 fused_ordering(270) 00:08:50.306 fused_ordering(271) 00:08:50.306 fused_ordering(272) 00:08:50.306 fused_ordering(273) 00:08:50.306 fused_ordering(274) 00:08:50.306 fused_ordering(275) 00:08:50.306 fused_ordering(276) 00:08:50.306 fused_ordering(277) 00:08:50.306 fused_ordering(278) 00:08:50.306 fused_ordering(279) 00:08:50.306 fused_ordering(280) 00:08:50.306 fused_ordering(281) 00:08:50.306 fused_ordering(282) 00:08:50.306 fused_ordering(283) 00:08:50.306 fused_ordering(284) 00:08:50.306 fused_ordering(285) 00:08:50.306 fused_ordering(286) 00:08:50.306 fused_ordering(287) 00:08:50.306 fused_ordering(288) 00:08:50.306 fused_ordering(289) 00:08:50.306 fused_ordering(290) 00:08:50.306 fused_ordering(291) 00:08:50.306 fused_ordering(292) 00:08:50.306 fused_ordering(293) 00:08:50.306 fused_ordering(294) 00:08:50.306 fused_ordering(295) 00:08:50.306 fused_ordering(296) 00:08:50.306 fused_ordering(297) 00:08:50.306 fused_ordering(298) 00:08:50.306 fused_ordering(299) 00:08:50.306 fused_ordering(300) 00:08:50.306 fused_ordering(301) 00:08:50.306 fused_ordering(302) 00:08:50.306 fused_ordering(303) 00:08:50.306 fused_ordering(304) 00:08:50.306 fused_ordering(305) 00:08:50.306 fused_ordering(306) 00:08:50.306 fused_ordering(307) 00:08:50.306 fused_ordering(308) 00:08:50.306 fused_ordering(309) 00:08:50.306 fused_ordering(310) 00:08:50.306 fused_ordering(311) 00:08:50.306 
fused_ordering(312) 00:08:50.306 [sequential entries fused_ordering(313) through fused_ordering(955) elided; the elapsed timestamp advances from 00:08:50.306 through 00:08:51.239/00:08:51.240 and 00:08:52.176 to 00:08:53.111] fused_ordering(956) 00:08:53.111 
fused_ordering(957) 00:08:53.111 fused_ordering(958) 00:08:53.111 fused_ordering(959) 00:08:53.111 fused_ordering(960) 00:08:53.111 fused_ordering(961) 00:08:53.111 fused_ordering(962) 00:08:53.111 fused_ordering(963) 00:08:53.111 fused_ordering(964) 00:08:53.111 fused_ordering(965) 00:08:53.111 fused_ordering(966) 00:08:53.111 fused_ordering(967) 00:08:53.111 fused_ordering(968) 00:08:53.111 fused_ordering(969) 00:08:53.111 fused_ordering(970) 00:08:53.111 fused_ordering(971) 00:08:53.111 fused_ordering(972) 00:08:53.111 fused_ordering(973) 00:08:53.111 fused_ordering(974) 00:08:53.111 fused_ordering(975) 00:08:53.111 fused_ordering(976) 00:08:53.111 fused_ordering(977) 00:08:53.111 fused_ordering(978) 00:08:53.111 fused_ordering(979) 00:08:53.111 fused_ordering(980) 00:08:53.111 fused_ordering(981) 00:08:53.111 fused_ordering(982) 00:08:53.111 fused_ordering(983) 00:08:53.111 fused_ordering(984) 00:08:53.111 fused_ordering(985) 00:08:53.111 fused_ordering(986) 00:08:53.111 fused_ordering(987) 00:08:53.111 fused_ordering(988) 00:08:53.111 fused_ordering(989) 00:08:53.111 fused_ordering(990) 00:08:53.111 fused_ordering(991) 00:08:53.111 fused_ordering(992) 00:08:53.111 fused_ordering(993) 00:08:53.111 fused_ordering(994) 00:08:53.111 fused_ordering(995) 00:08:53.111 fused_ordering(996) 00:08:53.111 fused_ordering(997) 00:08:53.111 fused_ordering(998) 00:08:53.111 fused_ordering(999) 00:08:53.111 fused_ordering(1000) 00:08:53.111 fused_ordering(1001) 00:08:53.111 fused_ordering(1002) 00:08:53.111 fused_ordering(1003) 00:08:53.111 fused_ordering(1004) 00:08:53.111 fused_ordering(1005) 00:08:53.111 fused_ordering(1006) 00:08:53.111 fused_ordering(1007) 00:08:53.111 fused_ordering(1008) 00:08:53.111 fused_ordering(1009) 00:08:53.111 fused_ordering(1010) 00:08:53.111 fused_ordering(1011) 00:08:53.111 fused_ordering(1012) 00:08:53.111 fused_ordering(1013) 00:08:53.111 fused_ordering(1014) 00:08:53.111 fused_ordering(1015) 00:08:53.111 fused_ordering(1016) 00:08:53.111 fused_ordering(1017) 00:08:53.111 fused_ordering(1018) 00:08:53.111 fused_ordering(1019) 00:08:53.111 fused_ordering(1020) 00:08:53.111 fused_ordering(1021) 00:08:53.111 fused_ordering(1022) 00:08:53.111 fused_ordering(1023) 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:53.111 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.112 rmmod nvme_tcp 00:08:53.112 rmmod nvme_fabrics 00:08:53.112 rmmod nvme_keyring 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3305824 ']' 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3305824 
00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3305824 ']' 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3305824 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3305824 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3305824' 00:08:53.112 killing process with pid 3305824 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3305824 00:08:53.112 [2024-05-15 04:09:40.900893] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:53.112 04:09:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3305824 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.371 04:09:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.284 04:09:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.284 00:08:55.284 real 0m9.712s 00:08:55.284 user 0m6.898s 00:08:55.284 sys 0m5.238s 00:08:55.284 04:09:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:55.284 04:09:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 ************************************ 00:08:55.284 END TEST nvmf_fused_ordering 00:08:55.284 ************************************ 00:08:55.284 04:09:43 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:55.284 04:09:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:55.284 04:09:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:55.284 04:09:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 ************************************ 00:08:55.284 START TEST nvmf_delete_subsystem 00:08:55.284 ************************************ 00:08:55.284 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:55.612 
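For reference, the harness step above reduces to the following invocation (a minimal sketch reconstructed from the command echoed in the trace; run_test is the wrapper from common/autotest_common.sh that provides the xtrace and timing bookkeeping seen around it, and the workspace path is specific to this builder):

    # launch the next target test the same way the autotest harness does above
    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp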
* Looking for test storage... 00:08:55.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.612 04:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:58.142 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:58.142 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:58.142 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:58.142 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:08:58.142 00:08:58.142 --- 10.0.0.2 ping statistics --- 00:08:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.142 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:08:58.142 00:08:58.142 --- 10.0.0.1 ping statistics --- 00:08:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.142 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.142 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3308726 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3308726 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3308726 ']' 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
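The nvmf_tcp_init sequence traced above boils down to the commands below (a condensed sketch of what the trace shows; cvl_0_0/cvl_0_1 are this builder's E810 ports and the 10.0.0.0/24 addresses are the harness defaults):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Both pings answer in well under a millisecond here, so the data path is in place before nvmf_tgt is started inside the namespace.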
00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:58.143 04:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.143 [2024-05-15 04:09:46.003166] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:08:58.143 [2024-05-15 04:09:46.003246] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.143 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.143 [2024-05-15 04:09:46.079020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.401 [2024-05-15 04:09:46.189822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.401 [2024-05-15 04:09:46.189875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.401 [2024-05-15 04:09:46.189903] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.401 [2024-05-15 04:09:46.189914] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.401 [2024-05-15 04:09:46.189923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.401 [2024-05-15 04:09:46.190003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.401 [2024-05-15 04:09:46.190007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.967 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:58.967 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:08:58.967 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.967 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.967 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.225 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.225 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.225 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.225 04:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.225 [2024-05-15 04:09:47.000154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.225 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.226 04:09:47 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 [2024-05-15 04:09:47.016125] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:59.226 [2024-05-15 04:09:47.016358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 NULL1 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 Delay0 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3308878 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:59.226 04:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:59.226 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.226 [2024-05-15 04:09:47.091092] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
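Written out in order, the target-side setup traced above is roughly the following (a sketch assembled from the echoed RPCs; rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the /var/tmp/spdk.sock instance started earlier):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # queue deep I/O against the slow Delay0 namespace, then give it 2 s to fill up
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2

With Delay0 configured for about a second of added latency per I/O, the 128-deep perf queues are still full when nvmf_delete_subsystem is issued next, so the outstanding commands are expected to complete with sct=0, sc=8 (the NVMe generic status for Command Aborted due to SQ Deletion), which is what the burst of "completed with error" entries that follows reflects.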
00:09:01.123 04:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.123 04:09:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.123 04:09:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x [several hundred "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and repeated "starting I/O failed: -6" entries elided, logged between 00:09:01 and 00:09:02.316 while the subsystem is torn down; the following qpair state messages were interleaved with them] [2024-05-15 04:09:49.223187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a0000c00 is same with the state(5) to be set [2024-05-15 04:09:50.190573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12377f0 is same with the state(5) to be set [2024-05-15 04:09:50.225737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218880 is same with the state(5) to be set [2024-05-15 04:09:50.226120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e790 is same with the state(5) to be set [2024-05-15 04:09:50.226261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a000c2f0 is same with the state(5) to be set [2024-05-15 04:09:50.226532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1217e10 is 
same with the state(5) to be set 00:09:02.316 Initializing NVMe Controllers 00:09:02.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.316 Controller IO queue size 128, less than required. 00:09:02.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:02.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:02.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:02.316 Initialization complete. Launching workers. 00:09:02.316 ======================================================== 00:09:02.316 Latency(us) 00:09:02.316 Device Information : IOPS MiB/s Average min max 00:09:02.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.14 0.08 1057216.08 1103.53 2002630.85 00:09:02.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.85 0.07 912122.17 564.65 1013247.02 00:09:02.316 ======================================================== 00:09:02.316 Total : 319.98 0.16 990630.35 564.65 2002630.85 00:09:02.316 00:09:02.316 [2024-05-15 04:09:50.227864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12377f0 (9): Bad file descriptor 00:09:02.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:02.316 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.316 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:02.316 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3308878 00:09:02.316 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3308878 00:09:02.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3308878) - No such process 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3308878 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3308878 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3308878 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.883 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.884 [2024-05-15 04:09:50.749974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3309278 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.884 04:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:02.884 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.884 [2024-05-15 04:09:50.813014] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
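Condensed, the traced steps above (target/delete_subsystem.sh@48 through @58) re-create the subsystem that was just deleted, re-attach its listener and the Delay0 namespace, then restart background perf I/O and poll until that perf process goes away. The following is a minimal sketch assembled only from the commands visible in this trace; rpc_cmd stands for the framework's scripts/rpc.py wrapper, the full binary path is shortened, and the real script's loop bounds and error handling are simplified:

  # re-create the subsystem, then re-add listener and namespace
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # restart background I/O against the subsystem and remember its pid (3309278 in this run)
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # poll until perf exits (same kill -0 / sleep 0.5 pattern as the trace)
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done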
00:09:03.449 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.449 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:03.449 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.015 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.015 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:04.015 04:09:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.272 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.272 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:04.272 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.835 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.835 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:04.835 04:09:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.399 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:05.399 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:05.399 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.962 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:05.962 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:05.962 04:09:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.962 Initializing NVMe Controllers 00:09:05.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.962 Controller IO queue size 128, less than required. 00:09:05.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:05.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:05.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:05.962 Initialization complete. Launching workers. 
00:09:05.962 ======================================================== 00:09:05.962 Latency(us) 00:09:05.962 Device Information : IOPS MiB/s Average min max 00:09:05.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003974.19 1000229.13 1014434.91 00:09:05.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005739.12 1000310.98 1041691.65 00:09:05.962 ======================================================== 00:09:05.962 Total : 256.00 0.12 1004856.66 1000229.13 1041691.65 00:09:05.962 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3309278 00:09:06.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3309278) - No such process 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3309278 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.528 rmmod nvme_tcp 00:09:06.528 rmmod nvme_fabrics 00:09:06.528 rmmod nvme_keyring 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3308726 ']' 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3308726 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3308726 ']' 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3308726 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3308726 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3308726' 00:09:06.528 killing process with pid 3308726 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3308726 00:09:06.528 [2024-05-15 04:09:54.358034] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:06.528 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3308726 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.787 04:09:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.688 04:09:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.688 00:09:08.688 real 0m13.404s 00:09:08.688 user 0m29.396s 00:09:08.688 sys 0m3.307s 00:09:08.688 04:09:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.688 04:09:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.688 ************************************ 00:09:08.688 END TEST nvmf_delete_subsystem 00:09:08.688 ************************************ 00:09:08.688 04:09:56 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:08.688 04:09:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:08.688 04:09:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.688 04:09:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.946 ************************************ 00:09:08.946 START TEST nvmf_ns_masking 00:09:08.946 ************************************ 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:08.946 * Looking for test storage... 
00:09:08.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.946 04:09:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=15236bc7-8527-407d-943c-382fa91a303b 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.947 04:09:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.947 04:09:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:11.478 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:11.479 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:11.479 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:11.479 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
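The nvmf/common.sh trace above matches the E810 device ID (0x8086:0x159b) and then resolves, through sysfs, which kernel net device sits under each matched PCI function (cvl_0_0 and cvl_0_1 in this run). A tiny standalone version of that lookup, assuming the same two PCI addresses seen in the trace, would be:

  # map each matched E810 PCI function to its netdev name via sysfs
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for net in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: $(basename "$net")"
      done
  done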
00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:11.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.479 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:09:11.766 00:09:11.766 --- 10.0.0.2 ping statistics --- 00:09:11.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.766 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:11.766 00:09:11.766 --- 10.0.0.1 ping statistics --- 00:09:11.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.766 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3312035 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3312035 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3312035 ']' 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.766 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.767 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:11.767 [2024-05-15 04:09:59.613136] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
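Taken together, the nvmf_tcp_init steps above move the target-side port (cvl_0_0) into a private network namespace, keep the initiator-side port (cvl_0_1) in the default namespace, assign 10.0.0.2 and 10.0.0.1 respectively, verify reachability with ping in both directions, and finally launch nvmf_tgt inside the namespace. A condensed sketch of that sequence, using only commands shown in the trace (addr-flush steps and error handling omitted, nvmf_tgt path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port in
  ping -c 1 10.0.0.2                                             # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability
  # the target application then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &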
00:09:11.767 [2024-05-15 04:09:59.613212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.767 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.767 [2024-05-15 04:09:59.699133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.025 [2024-05-15 04:09:59.818155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.025 [2024-05-15 04:09:59.818214] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.025 [2024-05-15 04:09:59.818231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.025 [2024-05-15 04:09:59.818244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.025 [2024-05-15 04:09:59.818256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.025 [2024-05-15 04:09:59.818336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.025 [2024-05-15 04:09:59.818406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.025 [2024-05-15 04:09:59.818435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.025 [2024-05-15 04:09:59.818437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.025 04:09:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.283 [2024-05-15 04:10:00.244746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.283 04:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:12.283 04:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:12.283 04:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:12.542 Malloc1 00:09:12.542 04:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:12.800 Malloc2 00:09:12.800 04:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.058 04:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:13.316 04:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.583 [2024-05-15 04:10:01.532409] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:13.583 [2024-05-15 04:10:01.532695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.583 04:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:13.583 04:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 15236bc7-8527-407d-943c-382fa91a303b -a 10.0.0.2 -s 4420 -i 4 00:09:13.848 04:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.848 04:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:13.848 04:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.848 04:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:13.848 04:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:15.744 [ 0]:0x1 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8e46fe41ccb94541a6c00e64411cbd1b 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8e46fe41ccb94541a6c00e64411cbd1b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.744 04:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:16.309 [ 0]:0x1 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8e46fe41ccb94541a6c00e64411cbd1b 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8e46fe41ccb94541a6c00e64411cbd1b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:16.309 [ 1]:0x2 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:16.309 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.567 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.824 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:17.082 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:17.082 04:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 15236bc7-8527-407d-943c-382fa91a303b -a 10.0.0.2 -s 4420 -i 4 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:17.082 04:10:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:19.608 [ 0]:0x2 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:19.608 [ 0]:0x1 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.608 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.866 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8e46fe41ccb94541a6c00e64411cbd1b 00:09:19.866 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8e46fe41ccb94541a6c00e64411cbd1b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.866 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:19.866 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:19.867 [ 1]:0x2 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.867 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:20.125 04:10:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.125 04:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:20.125 [ 0]:0x2 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:20.125 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.382 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 15236bc7-8527-407d-943c-382fa91a303b -a 10.0.0.2 -s 4420 -i 4 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:20.640 04:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:23.167 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:23.168 [ 0]:0x1 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8e46fe41ccb94541a6c00e64411cbd1b 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8e46fe41ccb94541a6c00e64411cbd1b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:23.168 [ 1]:0x2 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.168 04:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.168 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:23.426 [ 0]:0x2 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.426 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:23.689 [2024-05-15 04:10:11.496900] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:23.689 
request: 00:09:23.689 { 00:09:23.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.689 "nsid": 2, 00:09:23.689 "host": "nqn.2016-06.io.spdk:host1", 00:09:23.689 "method": "nvmf_ns_remove_host", 00:09:23.689 "req_id": 1 00:09:23.689 } 00:09:23.689 Got JSON-RPC error response 00:09:23.689 response: 00:09:23.689 { 00:09:23.689 "code": -32602, 00:09:23.689 "message": "Invalid parameters" 00:09:23.689 } 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:23.689 [ 0]:0x2 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ff4cbc0d7fba4a7c97dbb6d56bdbe7a0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:23.689 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.948 04:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.206 rmmod nvme_tcp 00:09:24.206 rmmod nvme_fabrics 00:09:24.206 rmmod nvme_keyring 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3312035 ']' 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3312035 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3312035 ']' 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3312035 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3312035 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3312035' 00:09:24.206 killing process with pid 3312035 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3312035 00:09:24.206 [2024-05-15 04:10:12.121014] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:24.206 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3312035 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.464 04:10:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.999 04:10:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:26.999 00:09:26.999 real 0m17.746s 00:09:26.999 user 0m53.993s 00:09:26.999 sys 0m4.153s 00:09:26.999 04:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.999 04:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:26.999 ************************************ 00:09:26.999 END TEST nvmf_ns_masking 00:09:26.999 ************************************ 00:09:26.999 04:10:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:26.999 04:10:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:26.999 04:10:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:27.000 04:10:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:27.000 04:10:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.000 ************************************ 00:09:27.000 START TEST nvmf_nvme_cli 00:09:27.000 ************************************ 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:27.000 * Looking for test storage... 
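The nvmf_ns_masking run that finishes above exercises per-host namespace visibility: the target toggles which hosts may see namespace 1 with nvmf_ns_add_host / nvmf_ns_remove_host, and the kernel initiator re-reads the namespace's NGUID to confirm the change (an all-zero NGUID means the namespace is masked). A condensed sketch of that pattern, reusing the rpc.py path, subsystem NQN and host NQN from the trace; the visibility check is a simplified reconstruction, not the script's exact ns_is_visible helper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    # Expose namespace 1 to this host, then check what the connected initiator sees.
    $rpc nvmf_ns_add_host "$subsys" 1 "$host"
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # non-zero NGUID => visible

    # Hide it again; id-ns now reports an all-zero NGUID for the masked namespace.
    $rpc nvmf_ns_remove_host "$subsys" 1 "$host"
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid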
00:09:27.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.000 04:10:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.538 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:29.539 00:09:29.539 --- 10.0.0.2 ping statistics --- 00:09:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.539 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:09:29.539 00:09:29.539 --- 10.0.0.1 ping statistics --- 00:09:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.539 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3316007 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3316007 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3316007 ']' 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:29.539 04:10:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:29.539 [2024-05-15 04:10:17.271974] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:29.539 [2024-05-15 04:10:17.272061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.539 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.539 [2024-05-15 04:10:17.354022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.539 [2024-05-15 04:10:17.474811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.539 [2024-05-15 04:10:17.474880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
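Before nvmf_tgt starts, nvmf_tcp_init (traced above) splits the two e810 ports between a network namespace for the target and the default namespace for the initiator, then verifies reachability with ping. Using the interface names and addresses from the trace, the setup boils down to roughly the following; this is a sketch of what the common.sh helpers do, not a drop-in script:

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so it listens on 10.0.0.2 while the kernel initiator connects from 10.0.0.1.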
00:09:29.539 [2024-05-15 04:10:17.474897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.539 [2024-05-15 04:10:17.474910] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.539 [2024-05-15 04:10:17.474922] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.539 [2024-05-15 04:10:17.475000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.539 [2024-05-15 04:10:17.475054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.539 [2024-05-15 04:10:17.475107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.539 [2024-05-15 04:10:17.475110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 [2024-05-15 04:10:18.230851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 Malloc0 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 Malloc1 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 [2024-05-15 04:10:18.315298] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:30.473 [2024-05-15 04:10:18.315629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:30.473 00:09:30.473 Discovery Log Number of Records 2, Generation counter 2 00:09:30.473 =====Discovery Log Entry 0====== 00:09:30.473 trtype: tcp 00:09:30.473 adrfam: ipv4 00:09:30.473 subtype: current discovery subsystem 00:09:30.473 treq: not required 00:09:30.473 portid: 0 00:09:30.473 trsvcid: 4420 00:09:30.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:30.473 traddr: 10.0.0.2 00:09:30.473 eflags: explicit discovery connections, duplicate discovery information 00:09:30.473 sectype: none 00:09:30.473 =====Discovery Log Entry 1====== 00:09:30.473 trtype: tcp 00:09:30.473 adrfam: ipv4 00:09:30.473 subtype: nvme subsystem 00:09:30.473 treq: not required 00:09:30.473 portid: 0 00:09:30.473 trsvcid: 4420 00:09:30.473 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:30.473 traddr: 10.0.0.2 00:09:30.473 eflags: none 00:09:30.473 sectype: none 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
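The RPC calls traced above stand up the nvme_cli target end to end: a TCP transport, two 64 MiB malloc bdevs, one subsystem carrying both namespaces, and listeners for the subsystem and the discovery service, after which the initiator runs nvme discover and nvme connect. Condensed into plain commands, with the rpc.py path, NQNs, serial and addresses copied from the trace; the real run wraps these in rpc_cmd and passes --hostnqn/--hostid to the nvme tools:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: the discovery log should show two entries, and the connect
    # surfaces /dev/nvme0n1 and /dev/nvme0n2 with serial SPDKISFASTANDAWESOME.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420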
00:09:30.473 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:30.474 04:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.474 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:30.474 04:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:31.040 04:10:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:33.568 /dev/nvme0n1 ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:33.568 04:10:21 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:33.568 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.827 rmmod nvme_tcp 00:09:33.827 rmmod nvme_fabrics 00:09:33.827 rmmod nvme_keyring 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3316007 ']' 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3316007 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3316007 ']' 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3316007 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3316007 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3316007' 00:09:33.827 killing process with pid 3316007 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3316007 00:09:33.827 [2024-05-15 04:10:21.684726] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:33.827 04:10:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3316007 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.085 04:10:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.621 04:10:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.621 00:09:36.621 real 0m9.529s 00:09:36.621 user 0m18.725s 00:09:36.621 sys 0m2.525s 00:09:36.621 04:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.621 04:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 ************************************ 00:09:36.621 END TEST nvmf_nvme_cli 00:09:36.621 ************************************ 00:09:36.621 04:10:24 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:36.621 04:10:24 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:36.621 04:10:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:36.621 04:10:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.621 04:10:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 ************************************ 00:09:36.621 START 
TEST nvmf_vfio_user 00:09:36.621 ************************************ 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:36.621 * Looking for test storage... 00:09:36.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.621 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3316949 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3316949' 00:09:36.622 Process pid: 3316949 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3316949 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3316949 ']' 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:36.622 [2024-05-15 04:10:24.238168] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:36.622 [2024-05-15 04:10:24.238266] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.622 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.622 [2024-05-15 04:10:24.306056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.622 [2024-05-15 04:10:24.415629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.622 [2024-05-15 04:10:24.415679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.622 [2024-05-15 04:10:24.415707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.622 [2024-05-15 04:10:24.415719] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.622 [2024-05-15 04:10:24.415729] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:36.622 [2024-05-15 04:10:24.415787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.622 [2024-05-15 04:10:24.415845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.622 [2024-05-15 04:10:24.415911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.622 [2024-05-15 04:10:24.415914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:09:36.622 04:10:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:37.555 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:37.813 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:37.813 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:37.813 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:37.813 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:37.813 04:10:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:38.071 Malloc1 00:09:38.071 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:38.329 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:38.586 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:38.843 [2024-05-15 04:10:26.784151] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:38.843 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:38.843 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:38.844 04:10:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:39.101 Malloc2 00:09:39.101 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:39.358 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:39.615 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:39.873 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:39.873 [2024-05-15 04:10:27.817121] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:09:39.873 [2024-05-15 04:10:27.817164] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317370 ] 00:09:39.873 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.873 [2024-05-15 04:10:27.850411] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:39.873 [2024-05-15 04:10:27.859418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:39.873 [2024-05-15 04:10:27.859445] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbcecd4e000 00:09:39.873 [2024-05-15 04:10:27.860413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.861406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.862412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.863416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.864423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.865424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.866432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.867452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.873 [2024-05-15 04:10:27.868439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:39.873 [2024-05-15 04:10:27.868462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbcecd43000 00:09:39.873 [2024-05-15 04:10:27.869579] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:39.873 [2024-05-15 04:10:27.884742] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:39.873 [2024-05-15 04:10:27.884800] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:40.133 [2024-05-15 04:10:27.889577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:40.133 [2024-05-15 04:10:27.889644] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:40.133 [2024-05-15 04:10:27.889769] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:40.133 [2024-05-15 04:10:27.889816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:40.133 [2024-05-15 04:10:27.889838] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:40.133 [2024-05-15 04:10:27.890560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:40.133 [2024-05-15 04:10:27.890589] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:40.133 [2024-05-15 04:10:27.890610] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:40.133 [2024-05-15 04:10:27.891568] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:40.133 [2024-05-15 04:10:27.891590] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:40.133 [2024-05-15 04:10:27.891604] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:40.133 [2024-05-15 04:10:27.892569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:40.133 [2024-05-15 04:10:27.892588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:40.133 [2024-05-15 04:10:27.894945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:40.133 [2024-05-15 04:10:27.894967] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:40.133 [2024-05-15 04:10:27.894991] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:40.133 [2024-05-15 04:10:27.895004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:40.133 
[2024-05-15 04:10:27.895115] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:40.133 [2024-05-15 04:10:27.895124] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:40.133 [2024-05-15 04:10:27.895138] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:40.133 [2024-05-15 04:10:27.895592] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:40.133 [2024-05-15 04:10:27.896588] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:40.133 [2024-05-15 04:10:27.897602] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:40.133 [2024-05-15 04:10:27.898597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:40.133 [2024-05-15 04:10:27.898700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:40.133 [2024-05-15 04:10:27.899610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:40.133 [2024-05-15 04:10:27.899628] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:40.133 [2024-05-15 04:10:27.899637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.899661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:40.133 [2024-05-15 04:10:27.899675] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.899704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.133 [2024-05-15 04:10:27.899715] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.133 [2024-05-15 04:10:27.899738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.133 [2024-05-15 04:10:27.899818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:40.133 [2024-05-15 04:10:27.899836] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:40.133 [2024-05-15 04:10:27.899845] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:40.133 [2024-05-15 04:10:27.899853] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:40.133 [2024-05-15 04:10:27.899860] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:40.133 [2024-05-15 04:10:27.899867] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:40.133 [2024-05-15 04:10:27.899875] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:40.133 [2024-05-15 04:10:27.899882] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.899898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.899938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:40.133 [2024-05-15 04:10:27.899960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:40.133 [2024-05-15 04:10:27.899986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.133 [2024-05-15 04:10:27.900004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.133 [2024-05-15 04:10:27.900017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.133 [2024-05-15 04:10:27.900030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.133 [2024-05-15 04:10:27.900038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:40.133 [2024-05-15 04:10:27.900076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:40.133 [2024-05-15 04:10:27.900088] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:40.133 [2024-05-15 04:10:27.900101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900113] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900125] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.133 [2024-05-15 
04:10:27.900156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:40.133 [2024-05-15 04:10:27.900213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:40.133 [2024-05-15 04:10:27.900267] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:40.133 [2024-05-15 04:10:27.900277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:40.133 [2024-05-15 04:10:27.900310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:40.133 [2024-05-15 04:10:27.900334] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:40.133 [2024-05-15 04:10:27.900355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:40.133 [2024-05-15 04:10:27.900382] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.133 [2024-05-15 04:10:27.900389] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.133 [2024-05-15 04:10:27.900399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.133 [2024-05-15 04:10:27.900426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900459] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900471] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.134 [2024-05-15 04:10:27.900479] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.134 [2024-05-15 04:10:27.900488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:40.134 
[2024-05-15 04:10:27.900532] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900556] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900574] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:40.134 [2024-05-15 04:10:27.900582] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:40.134 [2024-05-15 04:10:27.900590] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:40.134 [2024-05-15 04:10:27.900623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900743] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:40.134 [2024-05-15 04:10:27.900752] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:40.134 [2024-05-15 04:10:27.900761] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:40.134 [2024-05-15 04:10:27.900768] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:40.134 [2024-05-15 04:10:27.900777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:40.134 [2024-05-15 04:10:27.900788] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:40.134 [2024-05-15 04:10:27.900796] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:40.134 [2024-05-15 04:10:27.900805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900815] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:40.134 [2024-05-15 04:10:27.900823] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.134 [2024-05-15 04:10:27.900831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900847] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:40.134 [2024-05-15 04:10:27.900856] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:40.134 [2024-05-15 04:10:27.900864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:40.134 [2024-05-15 04:10:27.900875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:40.134 [2024-05-15 04:10:27.900951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:40.134 ===================================================== 00:09:40.134 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:40.134 ===================================================== 00:09:40.134 Controller Capabilities/Features 00:09:40.134 ================================ 00:09:40.134 Vendor ID: 4e58 00:09:40.134 Subsystem Vendor ID: 4e58 00:09:40.134 Serial Number: SPDK1 00:09:40.134 Model Number: SPDK bdev Controller 00:09:40.134 Firmware Version: 24.05 00:09:40.134 Recommended Arb Burst: 6 00:09:40.134 IEEE OUI Identifier: 8d 6b 50 00:09:40.134 Multi-path I/O 00:09:40.134 May have multiple subsystem ports: Yes 00:09:40.134 May have multiple controllers: Yes 00:09:40.134 Associated with SR-IOV VF: No 00:09:40.134 Max Data Transfer Size: 131072 00:09:40.134 Max Number of Namespaces: 32 00:09:40.134 Max Number of I/O Queues: 127 00:09:40.134 NVMe Specification Version (VS): 1.3 00:09:40.134 NVMe Specification Version (Identify): 1.3 00:09:40.134 Maximum Queue Entries: 256 00:09:40.134 Contiguous Queues Required: Yes 00:09:40.134 Arbitration Mechanisms Supported 00:09:40.134 Weighted Round Robin: Not Supported 00:09:40.134 Vendor Specific: Not Supported 00:09:40.134 Reset Timeout: 15000 ms 00:09:40.134 Doorbell Stride: 4 bytes 00:09:40.134 NVM Subsystem Reset: Not Supported 00:09:40.134 Command Sets Supported 00:09:40.134 NVM Command Set: Supported 00:09:40.134 Boot Partition: Not Supported 00:09:40.134 Memory Page Size Minimum: 4096 bytes 00:09:40.134 Memory Page Size Maximum: 4096 bytes 00:09:40.134 Persistent Memory Region: Not Supported 00:09:40.134 Optional Asynchronous 
Events Supported 00:09:40.134 Namespace Attribute Notices: Supported 00:09:40.134 Firmware Activation Notices: Not Supported 00:09:40.134 ANA Change Notices: Not Supported 00:09:40.134 PLE Aggregate Log Change Notices: Not Supported 00:09:40.134 LBA Status Info Alert Notices: Not Supported 00:09:40.134 EGE Aggregate Log Change Notices: Not Supported 00:09:40.134 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.134 Zone Descriptor Change Notices: Not Supported 00:09:40.134 Discovery Log Change Notices: Not Supported 00:09:40.134 Controller Attributes 00:09:40.134 128-bit Host Identifier: Supported 00:09:40.134 Non-Operational Permissive Mode: Not Supported 00:09:40.134 NVM Sets: Not Supported 00:09:40.134 Read Recovery Levels: Not Supported 00:09:40.134 Endurance Groups: Not Supported 00:09:40.134 Predictable Latency Mode: Not Supported 00:09:40.134 Traffic Based Keep ALive: Not Supported 00:09:40.134 Namespace Granularity: Not Supported 00:09:40.134 SQ Associations: Not Supported 00:09:40.134 UUID List: Not Supported 00:09:40.134 Multi-Domain Subsystem: Not Supported 00:09:40.134 Fixed Capacity Management: Not Supported 00:09:40.134 Variable Capacity Management: Not Supported 00:09:40.134 Delete Endurance Group: Not Supported 00:09:40.134 Delete NVM Set: Not Supported 00:09:40.134 Extended LBA Formats Supported: Not Supported 00:09:40.134 Flexible Data Placement Supported: Not Supported 00:09:40.134 00:09:40.134 Controller Memory Buffer Support 00:09:40.134 ================================ 00:09:40.134 Supported: No 00:09:40.134 00:09:40.134 Persistent Memory Region Support 00:09:40.134 ================================ 00:09:40.134 Supported: No 00:09:40.134 00:09:40.134 Admin Command Set Attributes 00:09:40.134 ============================ 00:09:40.134 Security Send/Receive: Not Supported 00:09:40.134 Format NVM: Not Supported 00:09:40.134 Firmware Activate/Download: Not Supported 00:09:40.134 Namespace Management: Not Supported 00:09:40.134 Device Self-Test: Not Supported 00:09:40.134 Directives: Not Supported 00:09:40.134 NVMe-MI: Not Supported 00:09:40.134 Virtualization Management: Not Supported 00:09:40.134 Doorbell Buffer Config: Not Supported 00:09:40.134 Get LBA Status Capability: Not Supported 00:09:40.134 Command & Feature Lockdown Capability: Not Supported 00:09:40.134 Abort Command Limit: 4 00:09:40.134 Async Event Request Limit: 4 00:09:40.134 Number of Firmware Slots: N/A 00:09:40.134 Firmware Slot 1 Read-Only: N/A 00:09:40.134 Firmware Activation Without Reset: N/A 00:09:40.134 Multiple Update Detection Support: N/A 00:09:40.134 Firmware Update Granularity: No Information Provided 00:09:40.134 Per-Namespace SMART Log: No 00:09:40.134 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.134 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:40.134 Command Effects Log Page: Supported 00:09:40.135 Get Log Page Extended Data: Supported 00:09:40.135 Telemetry Log Pages: Not Supported 00:09:40.135 Persistent Event Log Pages: Not Supported 00:09:40.135 Supported Log Pages Log Page: May Support 00:09:40.135 Commands Supported & Effects Log Page: Not Supported 00:09:40.135 Feature Identifiers & Effects Log Page:May Support 00:09:40.135 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.135 Data Area 4 for Telemetry Log: Not Supported 00:09:40.135 Error Log Page Entries Supported: 128 00:09:40.135 Keep Alive: Supported 00:09:40.135 Keep Alive Granularity: 10000 ms 00:09:40.135 00:09:40.135 NVM Command Set Attributes 00:09:40.135 ========================== 
00:09:40.135 Submission Queue Entry Size 00:09:40.135 Max: 64 00:09:40.135 Min: 64 00:09:40.135 Completion Queue Entry Size 00:09:40.135 Max: 16 00:09:40.135 Min: 16 00:09:40.135 Number of Namespaces: 32 00:09:40.135 Compare Command: Supported 00:09:40.135 Write Uncorrectable Command: Not Supported 00:09:40.135 Dataset Management Command: Supported 00:09:40.135 Write Zeroes Command: Supported 00:09:40.135 Set Features Save Field: Not Supported 00:09:40.135 Reservations: Not Supported 00:09:40.135 Timestamp: Not Supported 00:09:40.135 Copy: Supported 00:09:40.135 Volatile Write Cache: Present 00:09:40.135 Atomic Write Unit (Normal): 1 00:09:40.135 Atomic Write Unit (PFail): 1 00:09:40.135 Atomic Compare & Write Unit: 1 00:09:40.135 Fused Compare & Write: Supported 00:09:40.135 Scatter-Gather List 00:09:40.135 SGL Command Set: Supported (Dword aligned) 00:09:40.135 SGL Keyed: Not Supported 00:09:40.135 SGL Bit Bucket Descriptor: Not Supported 00:09:40.135 SGL Metadata Pointer: Not Supported 00:09:40.135 Oversized SGL: Not Supported 00:09:40.135 SGL Metadata Address: Not Supported 00:09:40.135 SGL Offset: Not Supported 00:09:40.135 Transport SGL Data Block: Not Supported 00:09:40.135 Replay Protected Memory Block: Not Supported 00:09:40.135 00:09:40.135 Firmware Slot Information 00:09:40.135 ========================= 00:09:40.135 Active slot: 1 00:09:40.135 Slot 1 Firmware Revision: 24.05 00:09:40.135 00:09:40.135 00:09:40.135 Commands Supported and Effects 00:09:40.135 ============================== 00:09:40.135 Admin Commands 00:09:40.135 -------------- 00:09:40.135 Get Log Page (02h): Supported 00:09:40.135 Identify (06h): Supported 00:09:40.135 Abort (08h): Supported 00:09:40.135 Set Features (09h): Supported 00:09:40.135 Get Features (0Ah): Supported 00:09:40.135 Asynchronous Event Request (0Ch): Supported 00:09:40.135 Keep Alive (18h): Supported 00:09:40.135 I/O Commands 00:09:40.135 ------------ 00:09:40.135 Flush (00h): Supported LBA-Change 00:09:40.135 Write (01h): Supported LBA-Change 00:09:40.135 Read (02h): Supported 00:09:40.135 Compare (05h): Supported 00:09:40.135 Write Zeroes (08h): Supported LBA-Change 00:09:40.135 Dataset Management (09h): Supported LBA-Change 00:09:40.135 Copy (19h): Supported LBA-Change 00:09:40.135 Unknown (79h): Supported LBA-Change 00:09:40.135 Unknown (7Ah): Supported 00:09:40.135 00:09:40.135 Error Log 00:09:40.135 ========= 00:09:40.135 00:09:40.135 Arbitration 00:09:40.135 =========== 00:09:40.135 Arbitration Burst: 1 00:09:40.135 00:09:40.135 Power Management 00:09:40.135 ================ 00:09:40.135 Number of Power States: 1 00:09:40.135 Current Power State: Power State #0 00:09:40.135 Power State #0: 00:09:40.135 Max Power: 0.00 W 00:09:40.135 Non-Operational State: Operational 00:09:40.135 Entry Latency: Not Reported 00:09:40.135 Exit Latency: Not Reported 00:09:40.135 Relative Read Throughput: 0 00:09:40.135 Relative Read Latency: 0 00:09:40.135 Relative Write Throughput: 0 00:09:40.135 Relative Write Latency: 0 00:09:40.135 Idle Power: Not Reported 00:09:40.135 Active Power: Not Reported 00:09:40.135 Non-Operational Permissive Mode: Not Supported 00:09:40.135 00:09:40.135 Health Information 00:09:40.135 ================== 00:09:40.135 Critical Warnings: 00:09:40.135 Available Spare Space: OK 00:09:40.135 Temperature: OK 00:09:40.135 Device Reliability: OK 00:09:40.135 Read Only: No 00:09:40.135 Volatile Memory Backup: OK 00:09:40.135 Current Temperature: 0 Kelvin (-2[2024-05-15 04:10:27.901083] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:40.135 [2024-05-15 04:10:27.901100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:40.135 [2024-05-15 04:10:27.901142] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:40.135 [2024-05-15 04:10:27.901160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.135 [2024-05-15 04:10:27.901172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.135 [2024-05-15 04:10:27.901182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.135 [2024-05-15 04:10:27.901192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.135 [2024-05-15 04:10:27.903940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:40.135 [2024-05-15 04:10:27.903964] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:40.135 [2024-05-15 04:10:27.904625] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:40.135 [2024-05-15 04:10:27.904711] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:40.135 [2024-05-15 04:10:27.904725] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:40.135 [2024-05-15 04:10:27.905632] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:40.135 [2024-05-15 04:10:27.905656] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:40.135 [2024-05-15 04:10:27.905714] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:40.135 [2024-05-15 04:10:27.907669] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:40.135 73 Celsius) 00:09:40.135 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:40.135 Available Spare: 0% 00:09:40.135 Available Spare Threshold: 0% 00:09:40.135 Life Percentage Used: 0% 00:09:40.135 Data Units Read: 0 00:09:40.135 Data Units Written: 0 00:09:40.135 Host Read Commands: 0 00:09:40.135 Host Write Commands: 0 00:09:40.135 Controller Busy Time: 0 minutes 00:09:40.135 Power Cycles: 0 00:09:40.135 Power On Hours: 0 hours 00:09:40.135 Unsafe Shutdowns: 0 00:09:40.135 Unrecoverable Media Errors: 0 00:09:40.135 Lifetime Error Log Entries: 0 00:09:40.135 Warning Temperature Time: 0 minutes 00:09:40.135 Critical Temperature Time: 0 minutes 00:09:40.135 00:09:40.135 Number of Queues 00:09:40.135 ================ 00:09:40.135 Number of I/O Submission Queues: 127 00:09:40.135 Number of I/O Completion Queues: 127 00:09:40.135 00:09:40.135 Active Namespaces 00:09:40.135 ================= 00:09:40.135 Namespace 
ID:1 00:09:40.135 Error Recovery Timeout: Unlimited 00:09:40.135 Command Set Identifier: NVM (00h) 00:09:40.135 Deallocate: Supported 00:09:40.135 Deallocated/Unwritten Error: Not Supported 00:09:40.135 Deallocated Read Value: Unknown 00:09:40.135 Deallocate in Write Zeroes: Not Supported 00:09:40.135 Deallocated Guard Field: 0xFFFF 00:09:40.135 Flush: Supported 00:09:40.135 Reservation: Supported 00:09:40.135 Namespace Sharing Capabilities: Multiple Controllers 00:09:40.135 Size (in LBAs): 131072 (0GiB) 00:09:40.135 Capacity (in LBAs): 131072 (0GiB) 00:09:40.135 Utilization (in LBAs): 131072 (0GiB) 00:09:40.135 NGUID: A1030108ADC641CDB0D743E69464191C 00:09:40.135 UUID: a1030108-adc6-41cd-b0d7-43e69464191c 00:09:40.135 Thin Provisioning: Not Supported 00:09:40.135 Per-NS Atomic Units: Yes 00:09:40.135 Atomic Boundary Size (Normal): 0 00:09:40.135 Atomic Boundary Size (PFail): 0 00:09:40.135 Atomic Boundary Offset: 0 00:09:40.135 Maximum Single Source Range Length: 65535 00:09:40.135 Maximum Copy Length: 65535 00:09:40.135 Maximum Source Range Count: 1 00:09:40.135 NGUID/EUI64 Never Reused: No 00:09:40.135 Namespace Write Protected: No 00:09:40.135 Number of LBA Formats: 1 00:09:40.135 Current LBA Format: LBA Format #00 00:09:40.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.135 00:09:40.135 04:10:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:40.135 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.135 [2024-05-15 04:10:28.136828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:45.399 Initializing NVMe Controllers 00:09:45.399 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:45.399 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:45.399 Initialization complete. Launching workers. 00:09:45.399 ======================================================== 00:09:45.399 Latency(us) 00:09:45.399 Device Information : IOPS MiB/s Average min max 00:09:45.399 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33002.59 128.92 3880.03 1185.14 7566.27 00:09:45.399 ======================================================== 00:09:45.399 Total : 33002.59 128.92 3880.03 1185.14 7566.27 00:09:45.399 00:09:45.399 [2024-05-15 04:10:33.160141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:45.399 04:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:45.399 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.399 [2024-05-15 04:10:33.405346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:50.747 Initializing NVMe Controllers 00:09:50.747 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:50.747 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:50.747 Initialization complete. Launching workers. 
00:09:50.747 ======================================================== 00:09:50.747 Latency(us) 00:09:50.747 Device Information : IOPS MiB/s Average min max 00:09:50.747 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.97 62.50 8006.72 5984.61 15972.27 00:09:50.747 ======================================================== 00:09:50.747 Total : 15999.97 62.50 8006.72 5984.61 15972.27 00:09:50.747 00:09:50.747 [2024-05-15 04:10:38.440522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.747 04:10:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:50.747 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.747 [2024-05-15 04:10:38.667624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:56.016 [2024-05-15 04:10:43.736367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:56.016 Initializing NVMe Controllers 00:09:56.016 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:56.016 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:56.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:56.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:56.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:56.016 Initialization complete. Launching workers. 00:09:56.016 Starting thread on core 2 00:09:56.016 Starting thread on core 3 00:09:56.016 Starting thread on core 1 00:09:56.016 04:10:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:56.016 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.275 [2024-05-15 04:10:44.058466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:59.563 [2024-05-15 04:10:47.126304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:59.563 Initializing NVMe Controllers 00:09:59.563 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.563 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.563 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:59.563 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:59.563 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:59.563 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:59.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:59.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:59.563 Initialization complete. Launching workers. 
00:09:59.563 Starting thread on core 1 with urgent priority queue 00:09:59.563 Starting thread on core 2 with urgent priority queue 00:09:59.563 Starting thread on core 3 with urgent priority queue 00:09:59.563 Starting thread on core 0 with urgent priority queue 00:09:59.563 SPDK bdev Controller (SPDK1 ) core 0: 5441.00 IO/s 18.38 secs/100000 ios 00:09:59.563 SPDK bdev Controller (SPDK1 ) core 1: 5459.67 IO/s 18.32 secs/100000 ios 00:09:59.563 SPDK bdev Controller (SPDK1 ) core 2: 5753.00 IO/s 17.38 secs/100000 ios 00:09:59.563 SPDK bdev Controller (SPDK1 ) core 3: 5944.33 IO/s 16.82 secs/100000 ios 00:09:59.563 ======================================================== 00:09:59.563 00:09:59.563 04:10:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:59.563 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.563 [2024-05-15 04:10:47.431482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:59.563 Initializing NVMe Controllers 00:09:59.563 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.563 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.563 Namespace ID: 1 size: 0GB 00:09:59.563 Initialization complete. 00:09:59.563 INFO: using host memory buffer for IO 00:09:59.563 Hello world! 00:09:59.563 [2024-05-15 04:10:47.466121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:59.563 04:10:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:59.563 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.821 [2024-05-15 04:10:47.775391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.196 Initializing NVMe Controllers 00:10:01.196 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.196 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.196 Initialization complete. Launching workers. 
00:10:01.196 submit (in ns) avg, min, max = 8355.3, 3531.1, 4016998.9 00:10:01.196 complete (in ns) avg, min, max = 25707.6, 2076.7, 4015734.4 00:10:01.196 00:10:01.196 Submit histogram 00:10:01.196 ================ 00:10:01.196 Range in us Cumulative Count 00:10:01.196 3.508 - 3.532: 0.0078% ( 1) 00:10:01.196 3.532 - 3.556: 0.5237% ( 66) 00:10:01.196 3.556 - 3.579: 2.0478% ( 195) 00:10:01.196 3.579 - 3.603: 5.9247% ( 496) 00:10:01.196 3.603 - 3.627: 11.7321% ( 743) 00:10:01.196 3.627 - 3.650: 20.1892% ( 1082) 00:10:01.196 3.650 - 3.674: 28.1460% ( 1018) 00:10:01.196 3.674 - 3.698: 35.2743% ( 912) 00:10:01.196 3.698 - 3.721: 41.9572% ( 855) 00:10:01.196 3.721 - 3.745: 48.4993% ( 837) 00:10:01.196 3.745 - 3.769: 53.6267% ( 656) 00:10:01.196 3.769 - 3.793: 57.8318% ( 538) 00:10:01.196 3.793 - 3.816: 61.2944% ( 443) 00:10:01.196 3.816 - 3.840: 64.4990% ( 410) 00:10:01.196 3.840 - 3.864: 68.5087% ( 513) 00:10:01.196 3.864 - 3.887: 72.8857% ( 560) 00:10:01.196 3.887 - 3.911: 76.9189% ( 516) 00:10:01.196 3.911 - 3.935: 80.8113% ( 498) 00:10:01.196 3.935 - 3.959: 83.7893% ( 381) 00:10:01.196 3.959 - 3.982: 86.2514% ( 315) 00:10:01.196 3.982 - 4.006: 88.0100% ( 225) 00:10:01.196 4.006 - 4.030: 89.5107% ( 192) 00:10:01.196 4.030 - 4.053: 90.4721% ( 123) 00:10:01.196 4.053 - 4.077: 91.5898% ( 143) 00:10:01.196 4.077 - 4.101: 92.5590% ( 124) 00:10:01.196 4.101 - 4.124: 93.3406% ( 100) 00:10:01.196 4.124 - 4.148: 94.1066% ( 98) 00:10:01.196 4.148 - 4.172: 94.8413% ( 94) 00:10:01.196 4.172 - 4.196: 95.3338% ( 63) 00:10:01.196 4.196 - 4.219: 95.7871% ( 58) 00:10:01.196 4.219 - 4.243: 96.0997% ( 40) 00:10:01.196 4.243 - 4.267: 96.2873% ( 24) 00:10:01.196 4.267 - 4.290: 96.4984% ( 27) 00:10:01.196 4.290 - 4.314: 96.6390% ( 18) 00:10:01.196 4.314 - 4.338: 96.8110% ( 22) 00:10:01.196 4.338 - 4.361: 96.9908% ( 23) 00:10:01.196 4.361 - 4.385: 97.0846% ( 12) 00:10:01.196 4.385 - 4.409: 97.1862% ( 13) 00:10:01.196 4.409 - 4.433: 97.2331% ( 6) 00:10:01.196 4.433 - 4.456: 97.2643% ( 4) 00:10:01.196 4.456 - 4.480: 97.3112% ( 6) 00:10:01.196 4.480 - 4.504: 97.3503% ( 5) 00:10:01.196 4.504 - 4.527: 97.3894% ( 5) 00:10:01.196 4.527 - 4.551: 97.3972% ( 1) 00:10:01.196 4.551 - 4.575: 97.4050% ( 1) 00:10:01.196 4.575 - 4.599: 97.4128% ( 1) 00:10:01.196 4.599 - 4.622: 97.4207% ( 1) 00:10:01.196 4.622 - 4.646: 97.4285% ( 1) 00:10:01.196 4.646 - 4.670: 97.4441% ( 2) 00:10:01.196 4.670 - 4.693: 97.4754% ( 4) 00:10:01.196 4.693 - 4.717: 97.4832% ( 1) 00:10:01.196 4.764 - 4.788: 97.4910% ( 1) 00:10:01.196 4.788 - 4.812: 97.4988% ( 1) 00:10:01.196 4.836 - 4.859: 97.5145% ( 2) 00:10:01.196 4.859 - 4.883: 97.5457% ( 4) 00:10:01.196 4.883 - 4.907: 97.5535% ( 1) 00:10:01.196 4.907 - 4.930: 97.6004% ( 6) 00:10:01.196 4.930 - 4.954: 97.6317% ( 4) 00:10:01.196 4.954 - 4.978: 97.7020% ( 9) 00:10:01.196 4.978 - 5.001: 97.7411% ( 5) 00:10:01.196 5.001 - 5.025: 97.7802% ( 5) 00:10:01.196 5.025 - 5.049: 97.7958% ( 2) 00:10:01.196 5.049 - 5.073: 97.8271% ( 4) 00:10:01.196 5.073 - 5.096: 97.8818% ( 7) 00:10:01.196 5.096 - 5.120: 97.8975% ( 2) 00:10:01.196 5.120 - 5.144: 97.9443% ( 6) 00:10:01.196 5.144 - 5.167: 98.0069% ( 8) 00:10:01.196 5.167 - 5.191: 98.0147% ( 1) 00:10:01.196 5.191 - 5.215: 98.0303% ( 2) 00:10:01.196 5.215 - 5.239: 98.0616% ( 4) 00:10:01.196 5.239 - 5.262: 98.0850% ( 3) 00:10:01.196 5.262 - 5.286: 98.1007% ( 2) 00:10:01.196 5.286 - 5.310: 98.1319% ( 4) 00:10:01.196 5.333 - 5.357: 98.1476% ( 2) 00:10:01.196 5.357 - 5.381: 98.1866% ( 5) 00:10:01.196 5.381 - 5.404: 98.2023% ( 2) 00:10:01.196 5.404 - 5.428: 98.2257% ( 3) 
00:10:01.196 5.428 - 5.452: 98.2335% ( 1) 00:10:01.196 5.476 - 5.499: 98.2414% ( 1) 00:10:01.196 5.499 - 5.523: 98.2492% ( 1) 00:10:01.196 5.523 - 5.547: 98.2570% ( 1) 00:10:01.196 5.547 - 5.570: 98.2726% ( 2) 00:10:01.196 5.665 - 5.689: 98.2804% ( 1) 00:10:01.196 5.760 - 5.784: 98.2961% ( 2) 00:10:01.196 5.855 - 5.879: 98.3039% ( 1) 00:10:01.196 5.879 - 5.902: 98.3117% ( 1) 00:10:01.196 6.116 - 6.163: 98.3195% ( 1) 00:10:01.196 6.400 - 6.447: 98.3273% ( 1) 00:10:01.196 6.684 - 6.732: 98.3352% ( 1) 00:10:01.196 6.732 - 6.779: 98.3430% ( 1) 00:10:01.196 6.779 - 6.827: 98.3508% ( 1) 00:10:01.196 6.827 - 6.874: 98.3664% ( 2) 00:10:01.196 6.874 - 6.921: 98.3742% ( 1) 00:10:01.196 7.064 - 7.111: 98.3821% ( 1) 00:10:01.196 7.111 - 7.159: 98.3977% ( 2) 00:10:01.196 7.159 - 7.206: 98.4055% ( 1) 00:10:01.196 7.206 - 7.253: 98.4290% ( 3) 00:10:01.196 7.253 - 7.301: 98.4446% ( 2) 00:10:01.196 7.348 - 7.396: 98.4524% ( 1) 00:10:01.196 7.443 - 7.490: 98.4680% ( 2) 00:10:01.196 7.490 - 7.538: 98.4758% ( 1) 00:10:01.196 7.585 - 7.633: 98.4837% ( 1) 00:10:01.196 7.633 - 7.680: 98.4993% ( 2) 00:10:01.196 7.680 - 7.727: 98.5227% ( 3) 00:10:01.196 7.727 - 7.775: 98.5306% ( 1) 00:10:01.196 7.775 - 7.822: 98.5384% ( 1) 00:10:01.196 7.822 - 7.870: 98.5462% ( 1) 00:10:01.196 7.870 - 7.917: 98.5540% ( 1) 00:10:01.196 7.917 - 7.964: 98.5618% ( 1) 00:10:01.196 8.107 - 8.154: 98.5696% ( 1) 00:10:01.196 8.154 - 8.201: 98.5775% ( 1) 00:10:01.196 8.391 - 8.439: 98.5853% ( 1) 00:10:01.196 8.581 - 8.628: 98.5931% ( 1) 00:10:01.196 8.676 - 8.723: 98.6009% ( 1) 00:10:01.196 8.818 - 8.865: 98.6087% ( 1) 00:10:01.196 8.865 - 8.913: 98.6165% ( 1) 00:10:01.196 9.055 - 9.102: 98.6244% ( 1) 00:10:01.196 9.150 - 9.197: 98.6478% ( 3) 00:10:01.196 9.197 - 9.244: 98.6556% ( 1) 00:10:01.196 9.481 - 9.529: 98.6634% ( 1) 00:10:01.196 9.529 - 9.576: 98.6713% ( 1) 00:10:01.196 9.671 - 9.719: 98.6791% ( 1) 00:10:01.196 9.766 - 9.813: 98.6869% ( 1) 00:10:01.196 9.956 - 10.003: 98.6947% ( 1) 00:10:01.197 10.335 - 10.382: 98.7025% ( 1) 00:10:01.197 10.619 - 10.667: 98.7103% ( 1) 00:10:01.197 10.856 - 10.904: 98.7181% ( 1) 00:10:01.197 10.904 - 10.951: 98.7260% ( 1) 00:10:01.197 10.951 - 10.999: 98.7338% ( 1) 00:10:01.197 11.093 - 11.141: 98.7416% ( 1) 00:10:01.197 11.188 - 11.236: 98.7572% ( 2) 00:10:01.197 11.330 - 11.378: 98.7729% ( 2) 00:10:01.197 11.567 - 11.615: 98.7807% ( 1) 00:10:01.197 11.804 - 11.852: 98.7885% ( 1) 00:10:01.197 12.326 - 12.421: 98.7963% ( 1) 00:10:01.197 12.516 - 12.610: 98.8041% ( 1) 00:10:01.197 12.610 - 12.705: 98.8119% ( 1) 00:10:01.197 13.084 - 13.179: 98.8198% ( 1) 00:10:01.197 13.179 - 13.274: 98.8276% ( 1) 00:10:01.197 13.274 - 13.369: 98.8432% ( 2) 00:10:01.197 13.369 - 13.464: 98.8510% ( 1) 00:10:01.197 13.653 - 13.748: 98.8588% ( 1) 00:10:01.197 13.748 - 13.843: 98.8667% ( 1) 00:10:01.197 13.938 - 14.033: 98.8745% ( 1) 00:10:01.197 14.222 - 14.317: 98.8901% ( 2) 00:10:01.197 14.317 - 14.412: 98.8979% ( 1) 00:10:01.197 14.507 - 14.601: 98.9057% ( 1) 00:10:01.197 17.256 - 17.351: 98.9214% ( 2) 00:10:01.197 17.351 - 17.446: 98.9292% ( 1) 00:10:01.197 17.446 - 17.541: 98.9526% ( 3) 00:10:01.197 17.541 - 17.636: 98.9683% ( 2) 00:10:01.197 17.636 - 17.730: 99.0152% ( 6) 00:10:01.197 17.730 - 17.825: 99.0699% ( 7) 00:10:01.197 17.825 - 17.920: 99.1246% ( 7) 00:10:01.197 17.920 - 18.015: 99.1871% ( 8) 00:10:01.197 18.015 - 18.110: 99.2887% ( 13) 00:10:01.197 18.110 - 18.204: 99.3669% ( 10) 00:10:01.197 18.204 - 18.299: 99.4216% ( 7) 00:10:01.197 18.299 - 18.394: 99.4685% ( 6) 00:10:01.197 18.394 - 18.489: 
99.5232% ( 7) 00:10:01.197 18.489 - 18.584: 99.5388% ( 2) 00:10:01.197 18.584 - 18.679: 99.6092% ( 9) 00:10:01.197 18.679 - 18.773: 99.6405% ( 4) 00:10:01.197 18.773 - 18.868: 99.6795% ( 5) 00:10:01.197 18.963 - 19.058: 99.6874% ( 1) 00:10:01.197 19.058 - 19.153: 99.7343% ( 6) 00:10:01.197 19.153 - 19.247: 99.7811% ( 6) 00:10:01.197 19.247 - 19.342: 99.7968% ( 2) 00:10:01.197 19.342 - 19.437: 99.8046% ( 1) 00:10:01.197 19.437 - 19.532: 99.8437% ( 5) 00:10:01.197 19.627 - 19.721: 99.8515% ( 1) 00:10:01.197 19.721 - 19.816: 99.8593% ( 1) 00:10:01.197 19.816 - 19.911: 99.8671% ( 1) 00:10:01.197 19.911 - 20.006: 99.8749% ( 1) 00:10:01.197 21.713 - 21.807: 99.8828% ( 1) 00:10:01.197 30.341 - 30.530: 99.8906% ( 1) 00:10:01.197 3980.705 - 4004.978: 99.9844% ( 12) 00:10:01.197 4004.978 - 4029.250: 100.0000% ( 2) 00:10:01.197 00:10:01.197 Complete histogram 00:10:01.197 ================== 00:10:01.197 Range in us Cumulative Count 00:10:01.197 2.074 - 2.086: 3.6423% ( 466) 00:10:01.197 2.086 - 2.098: 28.4899% ( 3179) 00:10:01.197 2.098 - 2.110: 34.9148% ( 822) 00:10:01.197 2.110 - 2.121: 44.0597% ( 1170) 00:10:01.197 2.121 - 2.133: 55.8543% ( 1509) 00:10:01.197 2.133 - 2.145: 57.8396% ( 254) 00:10:01.197 2.145 - 2.157: 62.3964% ( 583) 00:10:01.197 2.157 - 2.169: 69.6420% ( 927) 00:10:01.197 2.169 - 2.181: 71.3694% ( 221) 00:10:01.197 2.181 - 2.193: 75.3400% ( 508) 00:10:01.197 2.193 - 2.204: 79.5060% ( 533) 00:10:01.197 2.204 - 2.216: 80.2955% ( 101) 00:10:01.197 2.216 - 2.228: 82.6559% ( 302) 00:10:01.197 2.228 - 2.240: 87.3925% ( 606) 00:10:01.197 2.240 - 2.252: 88.6666% ( 163) 00:10:01.197 2.252 - 2.264: 90.6050% ( 248) 00:10:01.197 2.264 - 2.276: 92.5590% ( 250) 00:10:01.197 2.276 - 2.287: 93.0749% ( 66) 00:10:01.197 2.287 - 2.299: 93.7705% ( 89) 00:10:01.197 2.299 - 2.311: 94.4662% ( 89) 00:10:01.197 2.311 - 2.323: 94.8726% ( 52) 00:10:01.197 2.323 - 2.335: 95.0211% ( 19) 00:10:01.197 2.335 - 2.347: 95.1305% ( 14) 00:10:01.197 2.347 - 2.359: 95.2243% ( 12) 00:10:01.197 2.359 - 2.370: 95.3963% ( 22) 00:10:01.197 2.370 - 2.382: 95.6933% ( 38) 00:10:01.197 2.382 - 2.394: 96.0997% ( 52) 00:10:01.197 2.394 - 2.406: 96.4593% ( 46) 00:10:01.197 2.406 - 2.418: 96.7094% ( 32) 00:10:01.197 2.418 - 2.430: 96.9830% ( 35) 00:10:01.197 2.430 - 2.441: 97.1784% ( 25) 00:10:01.197 2.441 - 2.453: 97.3816% ( 26) 00:10:01.197 2.453 - 2.465: 97.6317% ( 32) 00:10:01.197 2.465 - 2.477: 97.7255% ( 12) 00:10:01.197 2.477 - 2.489: 97.8271% ( 13) 00:10:01.197 2.489 - 2.501: 97.8818% ( 7) 00:10:01.197 2.501 - 2.513: 97.9522% ( 9) 00:10:01.197 2.513 - 2.524: 98.0303% ( 10) 00:10:01.197 2.524 - 2.536: 98.1319% ( 13) 00:10:01.197 2.536 - 2.548: 98.1945% ( 8) 00:10:01.197 2.548 - 2.560: 98.2414% ( 6) 00:10:01.197 2.560 - 2.572: 98.2883% ( 6) 00:10:01.197 2.572 - 2.584: 98.3273% ( 5) 00:10:01.197 2.584 - 2.596: 98.3430% ( 2) 00:10:01.197 2.596 - 2.607: 98.3586% ( 2) 00:10:01.197 2.607 - 2.619: 98.3742% ( 2) 00:10:01.197 2.631 - 2.643: 98.3899% ( 2) 00:10:01.197 2.690 - 2.702: 98.3977% ( 1) 00:10:01.197 2.702 - 2.714: 98.4055% ( 1) 00:10:01.197 2.750 - 2.761: 98.4133% ( 1) 00:10:01.197 3.022 - 3.034: 98.4211% ( 1) 00:10:01.197 3.129 - 3.153: 98.4290% ( 1) 00:10:01.197 3.271 - 3.295: 98.4446% ( 2) 00:10:01.197 3.295 - 3.319: 98.4524% ( 1) 00:10:01.197 3.413 - 3.437: 98.4758% ( 3) 00:10:01.197 3.461 - 3.484: 98.4915% ( 2) 00:10:01.197 3.484 - 3.508: 98.4993% ( 1) 00:10:01.197 3.532 - 3.556: 98.5227% ( 3) 00:10:01.197 3.627 - 3.650: 98.5306% ( 1) 00:10:01.197 3.650 - 3.674: 98.5384% ( 1) 00:10:01.197 3.935 - 3.959: 98.5540% ( 2) 
00:10:01.197 3.959 - 3.982: 98.5618% ( 1) 00:10:01.197 3.982 - 4.006: 98.5696% ( 1) 00:10:01.197 4.101 - 4.124: 98.5775% ( 1) 00:10:01.197 4.124 - 4.148: 98.5853% ( 1) 00:10:01.197 4.883 - 4.907: 98.6009% ( 2) 00:10:01.197 4.930 - 4.954: 98.6087% ( 1) 00:10:01.197 5.167 - 5.191: 98.6165% ( 1) 00:10:01.197 5.333 - 5.357: 98.6244% ( 1) 00:10:01.197 5.523 - 5.547: 98.6322% ( 1) 00:10:01.197 5.760 - 5.784: 98.6400% ( 1) 00:10:01.197 5.784 - 5.807: 98.6478% ( 1) 00:10:01.197 5.807 - 5.831: 98.6556% ( 1) 00:10:01.197 5.855 - 5.879: 9[2024-05-15 04:10:48.795561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.197 8.6634% ( 1) 00:10:01.197 5.879 - 5.902: 98.6713% ( 1) 00:10:01.197 5.902 - 5.926: 98.6791% ( 1) 00:10:01.197 5.926 - 5.950: 98.6869% ( 1) 00:10:01.197 6.116 - 6.163: 98.7025% ( 2) 00:10:01.197 6.163 - 6.210: 98.7103% ( 1) 00:10:01.197 6.210 - 6.258: 98.7181% ( 1) 00:10:01.197 6.400 - 6.447: 98.7260% ( 1) 00:10:01.197 6.542 - 6.590: 98.7338% ( 1) 00:10:01.197 6.590 - 6.637: 98.7416% ( 1) 00:10:01.197 6.637 - 6.684: 98.7494% ( 1) 00:10:01.197 6.732 - 6.779: 98.7650% ( 2) 00:10:01.197 7.111 - 7.159: 98.7729% ( 1) 00:10:01.197 7.490 - 7.538: 98.7807% ( 1) 00:10:01.197 15.550 - 15.644: 98.7885% ( 1) 00:10:01.197 15.644 - 15.739: 98.8119% ( 3) 00:10:01.197 15.739 - 15.834: 98.8198% ( 1) 00:10:01.197 15.834 - 15.929: 98.8276% ( 1) 00:10:01.197 15.929 - 16.024: 98.8667% ( 5) 00:10:01.197 16.024 - 16.119: 98.8979% ( 4) 00:10:01.197 16.119 - 16.213: 98.9448% ( 6) 00:10:01.197 16.213 - 16.308: 98.9917% ( 6) 00:10:01.197 16.308 - 16.403: 99.0152% ( 3) 00:10:01.197 16.403 - 16.498: 99.0308% ( 2) 00:10:01.197 16.498 - 16.593: 99.0699% ( 5) 00:10:01.197 16.593 - 16.687: 99.1090% ( 5) 00:10:01.197 16.687 - 16.782: 99.1559% ( 6) 00:10:01.197 16.782 - 16.877: 99.1871% ( 4) 00:10:01.197 16.877 - 16.972: 99.2106% ( 3) 00:10:01.197 16.972 - 17.067: 99.2340% ( 3) 00:10:01.197 17.067 - 17.161: 99.2731% ( 5) 00:10:01.197 17.161 - 17.256: 99.2887% ( 2) 00:10:01.197 17.351 - 17.446: 99.2965% ( 1) 00:10:01.197 17.541 - 17.636: 99.3122% ( 2) 00:10:01.197 17.636 - 17.730: 99.3200% ( 1) 00:10:01.197 17.730 - 17.825: 99.3356% ( 2) 00:10:01.197 17.825 - 17.920: 99.3513% ( 2) 00:10:01.197 17.920 - 18.015: 99.3591% ( 1) 00:10:01.197 18.015 - 18.110: 99.3669% ( 1) 00:10:01.197 18.110 - 18.204: 99.3747% ( 1) 00:10:01.197 18.394 - 18.489: 99.3825% ( 1) 00:10:01.197 18.489 - 18.584: 99.3982% ( 2) 00:10:01.197 19.058 - 19.153: 99.4060% ( 1) 00:10:01.197 21.239 - 21.333: 99.4138% ( 1) 00:10:01.197 3980.705 - 4004.978: 99.8593% ( 57) 00:10:01.197 4004.978 - 4029.250: 100.0000% ( 18) 00:10:01.197 00:10:01.197 04:10:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:01.197 04:10:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:01.197 04:10:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:01.197 04:10:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:01.197 04:10:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:01.197 [ 00:10:01.197 { 00:10:01.197 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:01.197 "subtype": "Discovery", 00:10:01.197 "listen_addresses": [], 00:10:01.197 
"allow_any_host": true, 00:10:01.197 "hosts": [] 00:10:01.197 }, 00:10:01.197 { 00:10:01.197 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:01.197 "subtype": "NVMe", 00:10:01.197 "listen_addresses": [ 00:10:01.197 { 00:10:01.197 "trtype": "VFIOUSER", 00:10:01.197 "adrfam": "IPv4", 00:10:01.197 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:01.197 "trsvcid": "0" 00:10:01.197 } 00:10:01.197 ], 00:10:01.197 "allow_any_host": true, 00:10:01.197 "hosts": [], 00:10:01.197 "serial_number": "SPDK1", 00:10:01.197 "model_number": "SPDK bdev Controller", 00:10:01.197 "max_namespaces": 32, 00:10:01.197 "min_cntlid": 1, 00:10:01.197 "max_cntlid": 65519, 00:10:01.197 "namespaces": [ 00:10:01.197 { 00:10:01.197 "nsid": 1, 00:10:01.197 "bdev_name": "Malloc1", 00:10:01.197 "name": "Malloc1", 00:10:01.197 "nguid": "A1030108ADC641CDB0D743E69464191C", 00:10:01.197 "uuid": "a1030108-adc6-41cd-b0d7-43e69464191c" 00:10:01.197 } 00:10:01.197 ] 00:10:01.197 }, 00:10:01.197 { 00:10:01.197 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:01.197 "subtype": "NVMe", 00:10:01.197 "listen_addresses": [ 00:10:01.197 { 00:10:01.197 "trtype": "VFIOUSER", 00:10:01.197 "adrfam": "IPv4", 00:10:01.197 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:01.197 "trsvcid": "0" 00:10:01.197 } 00:10:01.197 ], 00:10:01.197 "allow_any_host": true, 00:10:01.197 "hosts": [], 00:10:01.197 "serial_number": "SPDK2", 00:10:01.197 "model_number": "SPDK bdev Controller", 00:10:01.197 "max_namespaces": 32, 00:10:01.197 "min_cntlid": 1, 00:10:01.197 "max_cntlid": 65519, 00:10:01.197 "namespaces": [ 00:10:01.197 { 00:10:01.197 "nsid": 1, 00:10:01.197 "bdev_name": "Malloc2", 00:10:01.197 "name": "Malloc2", 00:10:01.197 "nguid": "A2DD282EADB04DE48DA9054701E6C318", 00:10:01.197 "uuid": "a2dd282e-adb0-4de4-8da9-054701e6c318" 00:10:01.197 } 00:10:01.197 ] 00:10:01.197 } 00:10:01.197 ] 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3319896 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:01.197 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:01.197 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.455 [2024-05-15 04:10:49.260458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.455 Malloc3 00:10:01.455 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:01.713 [2024-05-15 04:10:49.593876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.713 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:01.713 Asynchronous Event Request test 00:10:01.713 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.713 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.713 Registering asynchronous event callbacks... 00:10:01.713 Starting namespace attribute notice tests for all controllers... 00:10:01.713 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:01.713 aer_cb - Changed Namespace 00:10:01.713 Cleaning up... 00:10:01.971 [ 00:10:01.971 { 00:10:01.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:01.971 "subtype": "Discovery", 00:10:01.971 "listen_addresses": [], 00:10:01.971 "allow_any_host": true, 00:10:01.971 "hosts": [] 00:10:01.971 }, 00:10:01.971 { 00:10:01.971 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:01.971 "subtype": "NVMe", 00:10:01.971 "listen_addresses": [ 00:10:01.971 { 00:10:01.971 "trtype": "VFIOUSER", 00:10:01.971 "adrfam": "IPv4", 00:10:01.971 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:01.971 "trsvcid": "0" 00:10:01.971 } 00:10:01.971 ], 00:10:01.971 "allow_any_host": true, 00:10:01.971 "hosts": [], 00:10:01.971 "serial_number": "SPDK1", 00:10:01.971 "model_number": "SPDK bdev Controller", 00:10:01.971 "max_namespaces": 32, 00:10:01.971 "min_cntlid": 1, 00:10:01.971 "max_cntlid": 65519, 00:10:01.971 "namespaces": [ 00:10:01.971 { 00:10:01.971 "nsid": 1, 00:10:01.971 "bdev_name": "Malloc1", 00:10:01.971 "name": "Malloc1", 00:10:01.971 "nguid": "A1030108ADC641CDB0D743E69464191C", 00:10:01.971 "uuid": "a1030108-adc6-41cd-b0d7-43e69464191c" 00:10:01.971 }, 00:10:01.971 { 00:10:01.971 "nsid": 2, 00:10:01.971 "bdev_name": "Malloc3", 00:10:01.971 "name": "Malloc3", 00:10:01.971 "nguid": "9BF415037A36480DB856E2E455E0957D", 00:10:01.971 "uuid": "9bf41503-7a36-480d-b856-e2e455e0957d" 00:10:01.971 } 00:10:01.971 ] 00:10:01.971 }, 00:10:01.971 { 00:10:01.971 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:01.971 "subtype": "NVMe", 00:10:01.971 "listen_addresses": [ 00:10:01.971 { 00:10:01.971 "trtype": "VFIOUSER", 00:10:01.971 "adrfam": "IPv4", 00:10:01.971 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:01.971 "trsvcid": "0" 00:10:01.971 } 00:10:01.971 ], 00:10:01.971 "allow_any_host": true, 00:10:01.971 "hosts": [], 00:10:01.971 "serial_number": "SPDK2", 00:10:01.971 "model_number": "SPDK bdev Controller", 00:10:01.971 
"max_namespaces": 32, 00:10:01.971 "min_cntlid": 1, 00:10:01.971 "max_cntlid": 65519, 00:10:01.971 "namespaces": [ 00:10:01.971 { 00:10:01.971 "nsid": 1, 00:10:01.971 "bdev_name": "Malloc2", 00:10:01.971 "name": "Malloc2", 00:10:01.971 "nguid": "A2DD282EADB04DE48DA9054701E6C318", 00:10:01.971 "uuid": "a2dd282e-adb0-4de4-8da9-054701e6c318" 00:10:01.971 } 00:10:01.971 ] 00:10:01.971 } 00:10:01.971 ] 00:10:01.971 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3319896 00:10:01.971 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:01.971 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:01.971 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:01.971 04:10:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:01.971 [2024-05-15 04:10:49.893774] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:01.971 [2024-05-15 04:10:49.893811] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320027 ] 00:10:01.971 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.971 [2024-05-15 04:10:49.927978] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:01.971 [2024-05-15 04:10:49.930311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:01.971 [2024-05-15 04:10:49.930339] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f48eb36c000 00:10:01.971 [2024-05-15 04:10:49.931313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.932315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.933323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.934333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.935341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.936352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.937352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.938365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:01.971 [2024-05-15 04:10:49.939373] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:01.971 [2024-05-15 04:10:49.939398] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f48eb361000 00:10:01.971 [2024-05-15 04:10:49.940511] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:01.971 [2024-05-15 04:10:49.954700] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:01.971 [2024-05-15 04:10:49.954733] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:01.971 [2024-05-15 04:10:49.959859] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:01.971 [2024-05-15 04:10:49.959911] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:01.971 [2024-05-15 04:10:49.960043] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:01.971 [2024-05-15 04:10:49.960071] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:01.971 [2024-05-15 04:10:49.960083] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:01.971 [2024-05-15 04:10:49.960863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:01.971 [2024-05-15 04:10:49.960883] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:01.971 [2024-05-15 04:10:49.960896] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:01.971 [2024-05-15 04:10:49.961866] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:01.971 [2024-05-15 04:10:49.961887] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:01.971 [2024-05-15 04:10:49.961900] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:01.971 [2024-05-15 04:10:49.962871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:01.971 [2024-05-15 04:10:49.962891] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:01.971 [2024-05-15 04:10:49.963877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:01.971 [2024-05-15 04:10:49.963896] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:01.971 [2024-05-15 04:10:49.963924] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:01.971 [2024-05-15 04:10:49.963944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:01.971 [2024-05-15 04:10:49.964056] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:01.971 [2024-05-15 04:10:49.964065] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:01.971 [2024-05-15 04:10:49.964074] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:01.972 [2024-05-15 04:10:49.964888] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:01.972 [2024-05-15 04:10:49.965892] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:01.972 [2024-05-15 04:10:49.966901] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:01.972 [2024-05-15 04:10:49.967889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:01.972 [2024-05-15 04:10:49.967976] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:01.972 [2024-05-15 04:10:49.968927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:01.972 [2024-05-15 04:10:49.968953] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:01.972 [2024-05-15 04:10:49.968963] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.968988] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:01.972 [2024-05-15 04:10:49.969002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.969026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:01.972 [2024-05-15 04:10:49.969036] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:01.972 [2024-05-15 04:10:49.969055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:01.972 [2024-05-15 04:10:49.976944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:01.972 [2024-05-15 04:10:49.976968] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:01.972 [2024-05-15 04:10:49.976978] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:01.972 [2024-05-15 04:10:49.976986] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:01.972 [2024-05-15 04:10:49.976993] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:01.972 [2024-05-15 04:10:49.977001] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:01.972 [2024-05-15 04:10:49.977009] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:01.972 [2024-05-15 04:10:49.977021] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.977039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.977059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:01.972 [2024-05-15 04:10:49.984941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:01.972 [2024-05-15 04:10:49.984970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.972 [2024-05-15 04:10:49.985002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.972 [2024-05-15 04:10:49.985015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.972 [2024-05-15 04:10:49.985027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.972 [2024-05-15 04:10:49.985036] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.985049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:01.972 [2024-05-15 04:10:49.985063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:49.992941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:49.992960] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:02.231 [2024-05-15 04:10:49.992974] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:49.992988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:49.992999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:49.993014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.000939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.001019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.001039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.001053] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:02.231 [2024-05-15 04:10:50.001062] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:02.231 [2024-05-15 04:10:50.001073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.008941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.008980] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:02.231 [2024-05-15 04:10:50.009010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.009027] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.009043] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:02.231 [2024-05-15 04:10:50.009052] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.231 [2024-05-15 04:10:50.009063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.016941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.016971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.016988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.017002] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:02.231 [2024-05-15 04:10:50.017011] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.231 [2024-05-15 04:10:50.017022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.024941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.024971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.024986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.025002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.025014] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.025024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.025033] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:02.231 [2024-05-15 04:10:50.025041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:02.231 [2024-05-15 04:10:50.025050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:02.231 [2024-05-15 04:10:50.025087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.032942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.032969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.039494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.039523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.047943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.047978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.055942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.055985] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:02.231 [2024-05-15 04:10:50.055996] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:02.231 [2024-05-15 04:10:50.056003] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:02.231 [2024-05-15 04:10:50.056009] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:02.231 [2024-05-15 04:10:50.056021] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:02.231 [2024-05-15 04:10:50.056033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:02.231 [2024-05-15 04:10:50.056042] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:02.231 [2024-05-15 04:10:50.056051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.056062] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:02.231 [2024-05-15 04:10:50.056071] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.231 [2024-05-15 04:10:50.056079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.056099] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:02.231 [2024-05-15 04:10:50.056109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:02.231 [2024-05-15 04:10:50.056118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:02.231 [2024-05-15 04:10:50.063942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.063986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:02.231 [2024-05-15 04:10:50.064018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:02.231 ===================================================== 00:10:02.231 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:02.231 ===================================================== 00:10:02.231 Controller Capabilities/Features 00:10:02.231 ================================ 00:10:02.231 Vendor ID: 4e58 00:10:02.231 Subsystem Vendor ID: 4e58 00:10:02.231 Serial Number: SPDK2 00:10:02.231 Model Number: SPDK bdev Controller 00:10:02.231 Firmware Version: 24.05 00:10:02.231 Recommended Arb Burst: 6 00:10:02.231 IEEE OUI Identifier: 8d 6b 50 00:10:02.231 Multi-path I/O 00:10:02.231 May have multiple subsystem ports: Yes 00:10:02.231 May have multiple controllers: Yes 00:10:02.231 Associated with SR-IOV VF: No 00:10:02.231 Max Data Transfer Size: 131072 00:10:02.231 Max Number of Namespaces: 32 00:10:02.231 Max Number of I/O Queues: 127 00:10:02.231 NVMe Specification Version (VS): 1.3 00:10:02.231 NVMe Specification Version (Identify): 1.3 00:10:02.231 Maximum Queue Entries: 256 00:10:02.231 Contiguous Queues Required: Yes 00:10:02.231 Arbitration Mechanisms Supported 00:10:02.231 Weighted Round Robin: Not Supported 00:10:02.231 Vendor Specific: Not Supported 00:10:02.231 Reset Timeout: 15000 ms 00:10:02.231 Doorbell Stride: 4 bytes 
00:10:02.231 NVM Subsystem Reset: Not Supported 00:10:02.231 Command Sets Supported 00:10:02.231 NVM Command Set: Supported 00:10:02.231 Boot Partition: Not Supported 00:10:02.231 Memory Page Size Minimum: 4096 bytes 00:10:02.231 Memory Page Size Maximum: 4096 bytes 00:10:02.231 Persistent Memory Region: Not Supported 00:10:02.231 Optional Asynchronous Events Supported 00:10:02.231 Namespace Attribute Notices: Supported 00:10:02.231 Firmware Activation Notices: Not Supported 00:10:02.231 ANA Change Notices: Not Supported 00:10:02.231 PLE Aggregate Log Change Notices: Not Supported 00:10:02.231 LBA Status Info Alert Notices: Not Supported 00:10:02.231 EGE Aggregate Log Change Notices: Not Supported 00:10:02.231 Normal NVM Subsystem Shutdown event: Not Supported 00:10:02.231 Zone Descriptor Change Notices: Not Supported 00:10:02.231 Discovery Log Change Notices: Not Supported 00:10:02.231 Controller Attributes 00:10:02.231 128-bit Host Identifier: Supported 00:10:02.231 Non-Operational Permissive Mode: Not Supported 00:10:02.231 NVM Sets: Not Supported 00:10:02.231 Read Recovery Levels: Not Supported 00:10:02.231 Endurance Groups: Not Supported 00:10:02.231 Predictable Latency Mode: Not Supported 00:10:02.231 Traffic Based Keep ALive: Not Supported 00:10:02.231 Namespace Granularity: Not Supported 00:10:02.231 SQ Associations: Not Supported 00:10:02.231 UUID List: Not Supported 00:10:02.231 Multi-Domain Subsystem: Not Supported 00:10:02.231 Fixed Capacity Management: Not Supported 00:10:02.231 Variable Capacity Management: Not Supported 00:10:02.231 Delete Endurance Group: Not Supported 00:10:02.231 Delete NVM Set: Not Supported 00:10:02.231 Extended LBA Formats Supported: Not Supported 00:10:02.231 Flexible Data Placement Supported: Not Supported 00:10:02.231 00:10:02.231 Controller Memory Buffer Support 00:10:02.231 ================================ 00:10:02.231 Supported: No 00:10:02.231 00:10:02.231 Persistent Memory Region Support 00:10:02.231 ================================ 00:10:02.231 Supported: No 00:10:02.231 00:10:02.231 Admin Command Set Attributes 00:10:02.231 ============================ 00:10:02.231 Security Send/Receive: Not Supported 00:10:02.231 Format NVM: Not Supported 00:10:02.231 Firmware Activate/Download: Not Supported 00:10:02.231 Namespace Management: Not Supported 00:10:02.231 Device Self-Test: Not Supported 00:10:02.231 Directives: Not Supported 00:10:02.231 NVMe-MI: Not Supported 00:10:02.231 Virtualization Management: Not Supported 00:10:02.231 Doorbell Buffer Config: Not Supported 00:10:02.231 Get LBA Status Capability: Not Supported 00:10:02.231 Command & Feature Lockdown Capability: Not Supported 00:10:02.231 Abort Command Limit: 4 00:10:02.231 Async Event Request Limit: 4 00:10:02.231 Number of Firmware Slots: N/A 00:10:02.231 Firmware Slot 1 Read-Only: N/A 00:10:02.231 Firmware Activation Without Reset: N/A 00:10:02.231 Multiple Update Detection Support: N/A 00:10:02.231 Firmware Update Granularity: No Information Provided 00:10:02.231 Per-Namespace SMART Log: No 00:10:02.231 Asymmetric Namespace Access Log Page: Not Supported 00:10:02.231 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:02.231 Command Effects Log Page: Supported 00:10:02.231 Get Log Page Extended Data: Supported 00:10:02.231 Telemetry Log Pages: Not Supported 00:10:02.231 Persistent Event Log Pages: Not Supported 00:10:02.231 Supported Log Pages Log Page: May Support 00:10:02.231 Commands Supported & Effects Log Page: Not Supported 00:10:02.232 Feature Identifiers & Effects Log Page:May 
Support 00:10:02.232 NVMe-MI Commands & Effects Log Page: May Support 00:10:02.232 Data Area 4 for Telemetry Log: Not Supported 00:10:02.232 Error Log Page Entries Supported: 128 00:10:02.232 Keep Alive: Supported 00:10:02.232 Keep Alive Granularity: 10000 ms 00:10:02.232 00:10:02.232 NVM Command Set Attributes 00:10:02.232 ========================== 00:10:02.232 Submission Queue Entry Size 00:10:02.232 Max: 64 00:10:02.232 Min: 64 00:10:02.232 Completion Queue Entry Size 00:10:02.232 Max: 16 00:10:02.232 Min: 16 00:10:02.232 Number of Namespaces: 32 00:10:02.232 Compare Command: Supported 00:10:02.232 Write Uncorrectable Command: Not Supported 00:10:02.232 Dataset Management Command: Supported 00:10:02.232 Write Zeroes Command: Supported 00:10:02.232 Set Features Save Field: Not Supported 00:10:02.232 Reservations: Not Supported 00:10:02.232 Timestamp: Not Supported 00:10:02.232 Copy: Supported 00:10:02.232 Volatile Write Cache: Present 00:10:02.232 Atomic Write Unit (Normal): 1 00:10:02.232 Atomic Write Unit (PFail): 1 00:10:02.232 Atomic Compare & Write Unit: 1 00:10:02.232 Fused Compare & Write: Supported 00:10:02.232 Scatter-Gather List 00:10:02.232 SGL Command Set: Supported (Dword aligned) 00:10:02.232 SGL Keyed: Not Supported 00:10:02.232 SGL Bit Bucket Descriptor: Not Supported 00:10:02.232 SGL Metadata Pointer: Not Supported 00:10:02.232 Oversized SGL: Not Supported 00:10:02.232 SGL Metadata Address: Not Supported 00:10:02.232 SGL Offset: Not Supported 00:10:02.232 Transport SGL Data Block: Not Supported 00:10:02.232 Replay Protected Memory Block: Not Supported 00:10:02.232 00:10:02.232 Firmware Slot Information 00:10:02.232 ========================= 00:10:02.232 Active slot: 1 00:10:02.232 Slot 1 Firmware Revision: 24.05 00:10:02.232 00:10:02.232 00:10:02.232 Commands Supported and Effects 00:10:02.232 ============================== 00:10:02.232 Admin Commands 00:10:02.232 -------------- 00:10:02.232 Get Log Page (02h): Supported 00:10:02.232 Identify (06h): Supported 00:10:02.232 Abort (08h): Supported 00:10:02.232 Set Features (09h): Supported 00:10:02.232 Get Features (0Ah): Supported 00:10:02.232 Asynchronous Event Request (0Ch): Supported 00:10:02.232 Keep Alive (18h): Supported 00:10:02.232 I/O Commands 00:10:02.232 ------------ 00:10:02.232 Flush (00h): Supported LBA-Change 00:10:02.232 Write (01h): Supported LBA-Change 00:10:02.232 Read (02h): Supported 00:10:02.232 Compare (05h): Supported 00:10:02.232 Write Zeroes (08h): Supported LBA-Change 00:10:02.232 Dataset Management (09h): Supported LBA-Change 00:10:02.232 Copy (19h): Supported LBA-Change 00:10:02.232 Unknown (79h): Supported LBA-Change 00:10:02.232 Unknown (7Ah): Supported 00:10:02.232 00:10:02.232 Error Log 00:10:02.232 ========= 00:10:02.232 00:10:02.232 Arbitration 00:10:02.232 =========== 00:10:02.232 Arbitration Burst: 1 00:10:02.232 00:10:02.232 Power Management 00:10:02.232 ================ 00:10:02.232 Number of Power States: 1 00:10:02.232 Current Power State: Power State #0 00:10:02.232 Power State #0: 00:10:02.232 Max Power: 0.00 W 00:10:02.232 Non-Operational State: Operational 00:10:02.232 Entry Latency: Not Reported 00:10:02.232 Exit Latency: Not Reported 00:10:02.232 Relative Read Throughput: 0 00:10:02.232 Relative Read Latency: 0 00:10:02.232 Relative Write Throughput: 0 00:10:02.232 Relative Write Latency: 0 00:10:02.232 Idle Power: Not Reported 00:10:02.232 Active Power: Not Reported 00:10:02.232 Non-Operational Permissive Mode: Not Supported 00:10:02.232 00:10:02.232 Health Information 
00:10:02.232 ================== 00:10:02.232 Critical Warnings: 00:10:02.232 Available Spare Space: OK 00:10:02.232 Temperature: OK 00:10:02.232 Device Reliability: OK 00:10:02.232 Read Only: No 00:10:02.232 Volatile Memory Backup: OK 00:10:02.232 Current Temperature: 0 Kelvin (-2[2024-05-15 04:10:50.064157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:02.232 [2024-05-15 04:10:50.071943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:02.232 [2024-05-15 04:10:50.072014] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:02.232 [2024-05-15 04:10:50.072034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.232 [2024-05-15 04:10:50.072046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.232 [2024-05-15 04:10:50.072057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.232 [2024-05-15 04:10:50.072067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.232 [2024-05-15 04:10:50.072185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:02.232 [2024-05-15 04:10:50.072227] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:02.232 [2024-05-15 04:10:50.073193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:02.232 [2024-05-15 04:10:50.073293] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:02.232 [2024-05-15 04:10:50.073309] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:02.232 [2024-05-15 04:10:50.074184] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:02.232 [2024-05-15 04:10:50.074209] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:02.232 [2024-05-15 04:10:50.074396] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:02.232 [2024-05-15 04:10:50.075616] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:02.232 73 Celsius) 00:10:02.232 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:02.232 Available Spare: 0% 00:10:02.232 Available Spare Threshold: 0% 00:10:02.232 Life Percentage Used: 0% 00:10:02.232 Data Units Read: 0 00:10:02.232 Data Units Written: 0 00:10:02.232 Host Read Commands: 0 00:10:02.232 Host Write Commands: 0 00:10:02.232 Controller Busy Time: 0 minutes 00:10:02.232 Power Cycles: 0 00:10:02.232 Power On Hours: 0 hours 00:10:02.232 Unsafe Shutdowns: 0 00:10:02.232 Unrecoverable Media Errors: 0 00:10:02.232 Lifetime Error Log Entries: 0 00:10:02.232 Warning Temperature Time: 0 
minutes 00:10:02.232 Critical Temperature Time: 0 minutes 00:10:02.232 00:10:02.232 Number of Queues 00:10:02.232 ================ 00:10:02.232 Number of I/O Submission Queues: 127 00:10:02.232 Number of I/O Completion Queues: 127 00:10:02.232 00:10:02.232 Active Namespaces 00:10:02.232 ================= 00:10:02.232 Namespace ID:1 00:10:02.232 Error Recovery Timeout: Unlimited 00:10:02.232 Command Set Identifier: NVM (00h) 00:10:02.232 Deallocate: Supported 00:10:02.232 Deallocated/Unwritten Error: Not Supported 00:10:02.232 Deallocated Read Value: Unknown 00:10:02.232 Deallocate in Write Zeroes: Not Supported 00:10:02.232 Deallocated Guard Field: 0xFFFF 00:10:02.232 Flush: Supported 00:10:02.232 Reservation: Supported 00:10:02.232 Namespace Sharing Capabilities: Multiple Controllers 00:10:02.232 Size (in LBAs): 131072 (0GiB) 00:10:02.232 Capacity (in LBAs): 131072 (0GiB) 00:10:02.232 Utilization (in LBAs): 131072 (0GiB) 00:10:02.232 NGUID: A2DD282EADB04DE48DA9054701E6C318 00:10:02.232 UUID: a2dd282e-adb0-4de4-8da9-054701e6c318 00:10:02.232 Thin Provisioning: Not Supported 00:10:02.232 Per-NS Atomic Units: Yes 00:10:02.232 Atomic Boundary Size (Normal): 0 00:10:02.232 Atomic Boundary Size (PFail): 0 00:10:02.232 Atomic Boundary Offset: 0 00:10:02.232 Maximum Single Source Range Length: 65535 00:10:02.232 Maximum Copy Length: 65535 00:10:02.232 Maximum Source Range Count: 1 00:10:02.232 NGUID/EUI64 Never Reused: No 00:10:02.232 Namespace Write Protected: No 00:10:02.232 Number of LBA Formats: 1 00:10:02.232 Current LBA Format: LBA Format #00 00:10:02.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.232 00:10:02.232 04:10:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:02.232 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.489 [2024-05-15 04:10:50.306133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:07.758 Initializing NVMe Controllers 00:10:07.758 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:07.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:07.758 Initialization complete. Launching workers. 
00:10:07.758 ======================================================== 00:10:07.758 Latency(us) 00:10:07.758 Device Information : IOPS MiB/s Average min max 00:10:07.758 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33987.48 132.76 3765.24 1192.69 9609.68 00:10:07.758 ======================================================== 00:10:07.758 Total : 33987.48 132.76 3765.24 1192.69 9609.68 00:10:07.758 00:10:07.758 [2024-05-15 04:10:55.410307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:07.758 04:10:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:07.758 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.758 [2024-05-15 04:10:55.646949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:13.030 Initializing NVMe Controllers 00:10:13.030 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:13.030 Initialization complete. Launching workers. 00:10:13.030 ======================================================== 00:10:13.030 Latency(us) 00:10:13.030 Device Information : IOPS MiB/s Average min max 00:10:13.030 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31174.58 121.78 4105.42 1225.40 8305.86 00:10:13.030 ======================================================== 00:10:13.030 Total : 31174.58 121.78 4105.42 1225.40 8305.86 00:10:13.030 00:10:13.030 [2024-05-15 04:11:00.668791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:13.030 04:11:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:13.030 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.030 [2024-05-15 04:11:00.887719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:18.344 [2024-05-15 04:11:06.024064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:18.344 Initializing NVMe Controllers 00:10:18.344 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:18.344 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:18.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:18.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:18.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:18.344 Initialization complete. Launching workers. 
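Side note on the two Latency(us) tables above: the MiB/s and average-latency columns follow arithmetically from the reported IOPS, the 4 KiB transfer size (-o 4096) and the queue depth (-q 128), so they can be cross-checked offline. A minimal shell sketch (not part of the test run; purely illustrative) using the read-pass figures:

  #!/usr/bin/env bash
  # Cross-check spdk_nvme_perf output:
  #   throughput   = IOPS * io_size / 2^20      (MiB/s)
  #   avg latency ~= qdepth / IOPS * 1e6        (us, Little's law)
  iops=33987.48; io_size=4096; qdepth=128       # values from the read pass above
  awk -v i="$iops" -v s="$io_size" -v q="$qdepth" 'BEGIN {
      printf "throughput: %.2f MiB/s\n", i * s / 1048576   # prints ~132.76, matching the table
      printf "avg latency: %.0f us\n",   q / i * 1e6       # prints ~3766, vs 3765.24 reported
  }'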
00:10:18.344 Starting thread on core 2 00:10:18.344 Starting thread on core 3 00:10:18.344 Starting thread on core 1 00:10:18.344 04:11:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:18.344 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.344 [2024-05-15 04:11:06.340473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:21.638 [2024-05-15 04:11:09.573641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:21.638 Initializing NVMe Controllers 00:10:21.638 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.638 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:21.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:21.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:21.638 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:21.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:21.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:21.638 Initialization complete. Launching workers. 00:10:21.638 Starting thread on core 1 with urgent priority queue 00:10:21.638 Starting thread on core 2 with urgent priority queue 00:10:21.638 Starting thread on core 3 with urgent priority queue 00:10:21.638 Starting thread on core 0 with urgent priority queue 00:10:21.638 SPDK bdev Controller (SPDK2 ) core 0: 4011.00 IO/s 24.93 secs/100000 ios 00:10:21.638 SPDK bdev Controller (SPDK2 ) core 1: 4011.67 IO/s 24.93 secs/100000 ios 00:10:21.638 SPDK bdev Controller (SPDK2 ) core 2: 4184.33 IO/s 23.90 secs/100000 ios 00:10:21.638 SPDK bdev Controller (SPDK2 ) core 3: 4069.00 IO/s 24.58 secs/100000 ios 00:10:21.638 ======================================================== 00:10:21.638 00:10:21.638 04:11:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:21.896 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.896 [2024-05-15 04:11:09.897452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:21.896 Initializing NVMe Controllers 00:10:21.896 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.896 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.896 Namespace ID: 1 size: 0GB 00:10:21.896 Initialization complete. 00:10:21.896 INFO: using host memory buffer for IO 00:10:21.896 Hello world! 
00:10:21.896 [2024-05-15 04:11:09.906510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.154 04:11:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:22.154 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.412 [2024-05-15 04:11:10.234701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:23.346 Initializing NVMe Controllers 00:10:23.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:23.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:23.346 Initialization complete. Launching workers. 00:10:23.346 submit (in ns) avg, min, max = 9217.9, 3508.9, 4015990.0 00:10:23.346 complete (in ns) avg, min, max = 25279.0, 2075.6, 4015328.9 00:10:23.346 00:10:23.346 Submit histogram 00:10:23.346 ================ 00:10:23.346 Range in us Cumulative Count 00:10:23.346 3.508 - 3.532: 0.1311% ( 17) 00:10:23.346 3.532 - 3.556: 0.4702% ( 44) 00:10:23.346 3.556 - 3.579: 1.8347% ( 177) 00:10:23.346 3.579 - 3.603: 5.2035% ( 437) 00:10:23.346 3.603 - 3.627: 10.8079% ( 727) 00:10:23.346 3.627 - 3.650: 18.7712% ( 1033) 00:10:23.346 3.650 - 3.674: 28.7851% ( 1299) 00:10:23.346 3.674 - 3.698: 38.2979% ( 1234) 00:10:23.346 3.698 - 3.721: 46.8548% ( 1110) 00:10:23.346 3.721 - 3.745: 52.3204% ( 709) 00:10:23.346 3.745 - 3.769: 56.6297% ( 559) 00:10:23.346 3.769 - 3.793: 61.1085% ( 581) 00:10:23.346 3.793 - 3.816: 64.7626% ( 474) 00:10:23.346 3.816 - 3.840: 68.1005% ( 433) 00:10:23.346 3.840 - 3.864: 71.4385% ( 433) 00:10:23.346 3.864 - 3.887: 74.9923% ( 461) 00:10:23.346 3.887 - 3.911: 79.2707% ( 555) 00:10:23.346 3.911 - 3.935: 82.7860% ( 456) 00:10:23.346 3.935 - 3.959: 85.2606% ( 321) 00:10:23.346 3.959 - 3.982: 87.3574% ( 272) 00:10:23.346 3.982 - 4.006: 89.1536% ( 233) 00:10:23.346 4.006 - 4.030: 90.4024% ( 162) 00:10:23.347 4.030 - 4.053: 91.5048% ( 143) 00:10:23.347 4.053 - 4.077: 92.5224% ( 132) 00:10:23.347 4.077 - 4.101: 93.2932% ( 100) 00:10:23.347 4.101 - 4.124: 94.0333% ( 96) 00:10:23.347 4.124 - 4.148: 94.7425% ( 92) 00:10:23.347 4.148 - 4.172: 95.3515% ( 79) 00:10:23.347 4.172 - 4.196: 95.6907% ( 44) 00:10:23.347 4.196 - 4.219: 95.9837% ( 38) 00:10:23.347 4.219 - 4.243: 96.1841% ( 26) 00:10:23.347 4.243 - 4.267: 96.3151% ( 17) 00:10:23.347 4.267 - 4.290: 96.4462% ( 17) 00:10:23.347 4.290 - 4.314: 96.5541% ( 14) 00:10:23.347 4.314 - 4.338: 96.6775% ( 16) 00:10:23.347 4.338 - 4.361: 96.8162% ( 18) 00:10:23.347 4.361 - 4.385: 96.9010% ( 11) 00:10:23.347 4.385 - 4.409: 96.9935% ( 12) 00:10:23.347 4.409 - 4.433: 97.0475% ( 7) 00:10:23.347 4.433 - 4.456: 97.1169% ( 9) 00:10:23.347 4.456 - 4.480: 97.1477% ( 4) 00:10:23.347 4.480 - 4.504: 97.1631% ( 2) 00:10:23.347 4.504 - 4.527: 97.1785% ( 2) 00:10:23.347 4.551 - 4.575: 97.1862% ( 1) 00:10:23.347 4.575 - 4.599: 97.2094% ( 3) 00:10:23.347 4.622 - 4.646: 97.2248% ( 2) 00:10:23.347 4.670 - 4.693: 97.2325% ( 1) 00:10:23.347 4.693 - 4.717: 97.2402% ( 1) 00:10:23.347 4.741 - 4.764: 97.2556% ( 2) 00:10:23.347 4.764 - 4.788: 97.2710% ( 2) 00:10:23.347 4.788 - 4.812: 97.2865% ( 2) 00:10:23.347 4.812 - 4.836: 97.3250% ( 5) 00:10:23.347 4.836 - 4.859: 97.3481% ( 3) 00:10:23.347 4.859 - 4.883: 97.3944% ( 6) 00:10:23.347 4.883 - 4.907: 97.4252% ( 4) 00:10:23.347 4.907 - 4.930: 97.4792% ( 7) 00:10:23.347 
4.930 - 4.954: 97.5023% ( 3) 00:10:23.347 4.954 - 4.978: 97.5254% ( 3) 00:10:23.347 4.978 - 5.001: 97.5794% ( 7) 00:10:23.347 5.001 - 5.025: 97.6025% ( 3) 00:10:23.347 5.025 - 5.049: 97.6411% ( 5) 00:10:23.347 5.049 - 5.073: 97.6642% ( 3) 00:10:23.347 5.073 - 5.096: 97.7105% ( 6) 00:10:23.347 5.096 - 5.120: 97.7259% ( 2) 00:10:23.347 5.120 - 5.144: 97.7490% ( 3) 00:10:23.347 5.144 - 5.167: 97.8107% ( 8) 00:10:23.347 5.167 - 5.191: 97.8492% ( 5) 00:10:23.347 5.191 - 5.215: 97.8646% ( 2) 00:10:23.347 5.215 - 5.239: 97.9109% ( 6) 00:10:23.347 5.239 - 5.262: 97.9186% ( 1) 00:10:23.347 5.262 - 5.286: 97.9340% ( 2) 00:10:23.347 5.286 - 5.310: 97.9571% ( 3) 00:10:23.347 5.310 - 5.333: 97.9803% ( 3) 00:10:23.347 5.333 - 5.357: 97.9957% ( 2) 00:10:23.347 5.357 - 5.381: 98.0265% ( 4) 00:10:23.347 5.381 - 5.404: 98.0419% ( 2) 00:10:23.347 5.404 - 5.428: 98.0496% ( 1) 00:10:23.347 5.428 - 5.452: 98.0651% ( 2) 00:10:23.347 5.452 - 5.476: 98.0805% ( 2) 00:10:23.347 5.476 - 5.499: 98.0959% ( 2) 00:10:23.347 5.641 - 5.665: 98.1036% ( 1) 00:10:23.347 5.689 - 5.713: 98.1113% ( 1) 00:10:23.347 5.713 - 5.736: 98.1190% ( 1) 00:10:23.347 5.736 - 5.760: 98.1267% ( 1) 00:10:23.347 5.902 - 5.926: 98.1344% ( 1) 00:10:23.347 5.997 - 6.021: 98.1422% ( 1) 00:10:23.347 6.021 - 6.044: 98.1499% ( 1) 00:10:23.347 6.068 - 6.116: 98.1576% ( 1) 00:10:23.347 6.210 - 6.258: 98.1653% ( 1) 00:10:23.347 6.258 - 6.305: 98.1730% ( 1) 00:10:23.347 6.305 - 6.353: 98.1884% ( 2) 00:10:23.347 6.495 - 6.542: 98.1961% ( 1) 00:10:23.347 6.590 - 6.637: 98.2038% ( 1) 00:10:23.347 6.637 - 6.684: 98.2192% ( 2) 00:10:23.347 6.732 - 6.779: 98.2424% ( 3) 00:10:23.347 6.921 - 6.969: 98.2501% ( 1) 00:10:23.347 6.969 - 7.016: 98.2578% ( 1) 00:10:23.347 7.016 - 7.064: 98.2732% ( 2) 00:10:23.347 7.064 - 7.111: 98.2886% ( 2) 00:10:23.347 7.111 - 7.159: 98.3040% ( 2) 00:10:23.347 7.206 - 7.253: 98.3117% ( 1) 00:10:23.347 7.253 - 7.301: 98.3195% ( 1) 00:10:23.347 7.348 - 7.396: 98.3272% ( 1) 00:10:23.347 7.396 - 7.443: 98.3349% ( 1) 00:10:23.347 7.538 - 7.585: 98.3426% ( 1) 00:10:23.347 7.585 - 7.633: 98.3503% ( 1) 00:10:23.347 7.633 - 7.680: 98.3657% ( 2) 00:10:23.347 7.680 - 7.727: 98.3734% ( 1) 00:10:23.347 7.727 - 7.775: 98.3811% ( 1) 00:10:23.347 7.775 - 7.822: 98.3888% ( 1) 00:10:23.347 7.822 - 7.870: 98.4043% ( 2) 00:10:23.347 7.870 - 7.917: 98.4197% ( 2) 00:10:23.347 7.917 - 7.964: 98.4351% ( 2) 00:10:23.347 7.964 - 8.012: 98.4428% ( 1) 00:10:23.347 8.012 - 8.059: 98.4505% ( 1) 00:10:23.347 8.059 - 8.107: 98.4582% ( 1) 00:10:23.347 8.107 - 8.154: 98.4659% ( 1) 00:10:23.347 8.154 - 8.201: 98.4813% ( 2) 00:10:23.347 8.201 - 8.249: 98.4891% ( 1) 00:10:23.347 8.391 - 8.439: 98.4968% ( 1) 00:10:23.347 8.486 - 8.533: 98.5122% ( 2) 00:10:23.347 8.628 - 8.676: 98.5199% ( 1) 00:10:23.347 8.676 - 8.723: 98.5276% ( 1) 00:10:23.347 8.723 - 8.770: 98.5353% ( 1) 00:10:23.347 8.770 - 8.818: 98.5584% ( 3) 00:10:23.347 9.434 - 9.481: 98.5661% ( 1) 00:10:23.347 9.481 - 9.529: 98.5739% ( 1) 00:10:23.347 9.719 - 9.766: 98.5816% ( 1) 00:10:23.347 9.766 - 9.813: 98.5893% ( 1) 00:10:23.347 9.813 - 9.861: 98.5970% ( 1) 00:10:23.347 9.861 - 9.908: 98.6047% ( 1) 00:10:23.347 9.908 - 9.956: 98.6124% ( 1) 00:10:23.347 10.145 - 10.193: 98.6201% ( 1) 00:10:23.347 10.193 - 10.240: 98.6278% ( 1) 00:10:23.347 10.240 - 10.287: 98.6432% ( 2) 00:10:23.347 10.335 - 10.382: 98.6509% ( 1) 00:10:23.347 10.382 - 10.430: 98.6586% ( 1) 00:10:23.347 10.477 - 10.524: 98.6664% ( 1) 00:10:23.347 10.619 - 10.667: 98.6818% ( 2) 00:10:23.347 10.714 - 10.761: 98.6895% ( 1) 00:10:23.347 10.856 - 
10.904: 98.6972% ( 1) 00:10:23.347 10.904 - 10.951: 98.7049% ( 1) 00:10:23.347 11.046 - 11.093: 98.7203% ( 2) 00:10:23.347 11.283 - 11.330: 98.7280% ( 1) 00:10:23.347 11.330 - 11.378: 98.7357% ( 1) 00:10:23.347 11.473 - 11.520: 98.7434% ( 1) 00:10:23.347 11.567 - 11.615: 98.7512% ( 1) 00:10:23.347 11.757 - 11.804: 98.7589% ( 1) 00:10:23.347 11.804 - 11.852: 98.7666% ( 1) 00:10:23.347 11.899 - 11.947: 98.7743% ( 1) 00:10:23.347 12.326 - 12.421: 98.7974% ( 3) 00:10:23.347 12.421 - 12.516: 98.8051% ( 1) 00:10:23.347 12.516 - 12.610: 98.8128% ( 1) 00:10:23.347 12.610 - 12.705: 98.8282% ( 2) 00:10:23.347 12.705 - 12.800: 98.8360% ( 1) 00:10:23.347 12.895 - 12.990: 98.8514% ( 2) 00:10:23.347 13.274 - 13.369: 98.8591% ( 1) 00:10:23.347 13.369 - 13.464: 98.8668% ( 1) 00:10:23.347 13.653 - 13.748: 98.8822% ( 2) 00:10:23.347 13.938 - 14.033: 98.8899% ( 1) 00:10:23.347 14.033 - 14.127: 98.8976% ( 1) 00:10:23.347 14.127 - 14.222: 98.9053% ( 1) 00:10:23.347 14.222 - 14.317: 98.9285% ( 3) 00:10:23.347 14.507 - 14.601: 98.9362% ( 1) 00:10:23.347 14.601 - 14.696: 98.9439% ( 1) 00:10:23.347 14.696 - 14.791: 98.9593% ( 2) 00:10:23.347 14.886 - 14.981: 98.9670% ( 1) 00:10:23.347 14.981 - 15.076: 98.9747% ( 1) 00:10:23.347 15.265 - 15.360: 98.9824% ( 1) 00:10:23.347 15.550 - 15.644: 98.9901% ( 1) 00:10:23.347 17.067 - 17.161: 98.9978% ( 1) 00:10:23.347 17.256 - 17.351: 99.0133% ( 2) 00:10:23.347 17.351 - 17.446: 99.0210% ( 1) 00:10:23.347 17.446 - 17.541: 99.0287% ( 1) 00:10:23.347 17.541 - 17.636: 99.0441% ( 2) 00:10:23.347 17.636 - 17.730: 99.1135% ( 9) 00:10:23.347 17.730 - 17.825: 99.1751% ( 8) 00:10:23.347 17.825 - 17.920: 99.2214% ( 6) 00:10:23.347 17.920 - 18.015: 99.3139% ( 12) 00:10:23.347 18.015 - 18.110: 99.3447% ( 4) 00:10:23.347 18.110 - 18.204: 99.3987% ( 7) 00:10:23.347 18.204 - 18.299: 99.4527% ( 7) 00:10:23.347 18.299 - 18.394: 99.4681% ( 2) 00:10:23.347 18.394 - 18.489: 99.4912% ( 3) 00:10:23.347 18.489 - 18.584: 99.5683% ( 10) 00:10:23.347 18.584 - 18.679: 99.5914% ( 3) 00:10:23.347 18.679 - 18.773: 99.6146% ( 3) 00:10:23.347 18.773 - 18.868: 99.6300% ( 2) 00:10:23.347 18.868 - 18.963: 99.6916% ( 8) 00:10:23.347 18.963 - 19.058: 99.7148% ( 3) 00:10:23.347 19.058 - 19.153: 99.7225% ( 1) 00:10:23.347 19.153 - 19.247: 99.7456% ( 3) 00:10:23.347 19.247 - 19.342: 99.7533% ( 1) 00:10:23.347 19.342 - 19.437: 99.7687% ( 2) 00:10:23.347 19.437 - 19.532: 99.7842% ( 2) 00:10:23.347 19.721 - 19.816: 99.7919% ( 1) 00:10:23.347 19.816 - 19.911: 99.7996% ( 1) 00:10:23.347 19.911 - 20.006: 99.8073% ( 1) 00:10:23.347 22.281 - 22.376: 99.8227% ( 2) 00:10:23.347 23.609 - 23.704: 99.8304% ( 1) 00:10:23.347 24.178 - 24.273: 99.8381% ( 1) 00:10:23.347 24.652 - 24.841: 99.8458% ( 1) 00:10:23.347 27.496 - 27.686: 99.8535% ( 1) 00:10:23.347 29.772 - 29.961: 99.8612% ( 1) 00:10:23.347 36.788 - 36.978: 99.8689% ( 1) 00:10:23.347 3980.705 - 4004.978: 99.9537% ( 11) 00:10:23.348 4004.978 - 4029.250: 100.0000% ( 6) 00:10:23.348 00:10:23.348 Complete histogram 00:10:23.348 ================== 00:10:23.348 Range in us Cumulative Count 00:10:23.348 2.074 - 2.086: 5.7817% ( 750) 00:10:23.348 2.086 - 2.098: 29.1551% ( 3032) 00:10:23.348 2.098 - 2.110: 34.1351% ( 646) 00:10:23.348 2.110 - 2.121: 46.6235% ( 1620) 00:10:23.348 2.121 - 2.133: 58.3642% ( 1523) 00:10:23.348 2.133 - 2.145: 60.2914% ( 250) 00:10:23.348 2.145 - 2.157: 65.9420% ( 733) 00:10:23.348 2.157 - 2.169: 73.0265% ( 919) 00:10:23.348 2.169 - 2.181: 74.4758% ( 188) 00:10:23.348 2.181 - 2.193: 79.5637% ( 660) 00:10:23.348 2.193 - 2.204: 83.5492% ( 517) 
00:10:23.348 2.204 - 2.216: 84.4588% ( 118) 00:10:23.348 2.216 - 2.228: 87.0259% ( 333) 00:10:23.348 2.228 - 2.240: 90.2559% ( 419) 00:10:23.348 2.240 - 2.252: 91.0962% ( 109) 00:10:23.348 2.252 - 2.264: 92.3990% ( 169) 00:10:23.348 2.264 - 2.276: 93.4628% ( 138) 00:10:23.348 2.276 - 2.287: 93.9254% ( 60) 00:10:23.348 2.287 - 2.299: 94.3494% ( 55) 00:10:23.348 2.299 - 2.311: 94.8967% ( 71) 00:10:23.348 2.311 - 2.323: 95.0740% ( 23) 00:10:23.348 2.323 - 2.335: 95.1819% ( 14) 00:10:23.348 2.335 - 2.347: 95.2513% ( 9) 00:10:23.348 2.347 - 2.359: 95.3207% ( 9) 00:10:23.348 2.359 - 2.370: 95.4286% ( 14) 00:10:23.348 2.370 - 2.382: 95.8295% ( 52) 00:10:23.348 2.382 - 2.394: 96.1147% ( 37) 00:10:23.348 2.394 - 2.406: 96.4385% ( 42) 00:10:23.348 2.406 - 2.418: 96.7314% ( 38) 00:10:23.348 2.418 - 2.430: 97.1092% ( 49) 00:10:23.348 2.430 - 2.441: 97.4098% ( 39) 00:10:23.348 2.441 - 2.453: 97.5794% ( 22) 00:10:23.348 2.453 - 2.465: 97.7644% ( 24) 00:10:23.348 2.465 - 2.477: 97.8800% ( 15) 00:10:23.348 2.477 - 2.489: 98.0265% ( 19) 00:10:23.348 2.489 - 2.501: 98.0496% ( 3) 00:10:23.348 2.501 - 2.513: 98.1190% ( 9) 00:10:23.348 2.513 - 2.524: 98.1576% ( 5) 00:10:23.348 2.524 - 2.536: 98.1884% ( 4) 00:10:23.348 2.536 - 2.548: 98.2347% ( 6) 00:10:23.348 2.548 - 2.560: 98.2501% ( 2) 00:10:23.348 2.560 - 2.572: 98.2578% ( 1) 00:10:23.348 2.572 - 2.584: 98.2732% ( 2) 00:10:23.348 2.584 - 2.596: 98.3040% ( 4) 00:10:23.348 2.596 - 2.607: 9[2024-05-15 04:11:11.328748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:23.606 8.3272% ( 3) 00:10:23.606 2.702 - 2.714: 98.3349% ( 1) 00:10:23.606 2.714 - 2.726: 98.3426% ( 1) 00:10:23.606 2.738 - 2.750: 98.3503% ( 1) 00:10:23.606 2.750 - 2.761: 98.3580% ( 1) 00:10:23.606 2.821 - 2.833: 98.3657% ( 1) 00:10:23.606 2.833 - 2.844: 98.3734% ( 1) 00:10:23.606 2.999 - 3.010: 98.3811% ( 1) 00:10:23.606 3.224 - 3.247: 98.3888% ( 1) 00:10:23.606 3.319 - 3.342: 98.3965% ( 1) 00:10:23.606 3.342 - 3.366: 98.4043% ( 1) 00:10:23.606 3.366 - 3.390: 98.4120% ( 1) 00:10:23.606 3.390 - 3.413: 98.4274% ( 2) 00:10:23.606 3.413 - 3.437: 98.4428% ( 2) 00:10:23.606 3.461 - 3.484: 98.4659% ( 3) 00:10:23.606 3.484 - 3.508: 98.4736% ( 1) 00:10:23.606 3.579 - 3.603: 98.4891% ( 2) 00:10:23.606 3.627 - 3.650: 98.4968% ( 1) 00:10:23.606 3.650 - 3.674: 98.5045% ( 1) 00:10:23.606 3.698 - 3.721: 98.5199% ( 2) 00:10:23.606 3.745 - 3.769: 98.5276% ( 1) 00:10:23.606 3.793 - 3.816: 98.5353% ( 1) 00:10:23.606 3.816 - 3.840: 98.5430% ( 1) 00:10:23.606 4.030 - 4.053: 98.5584% ( 2) 00:10:23.606 4.764 - 4.788: 98.5661% ( 1) 00:10:23.606 5.025 - 5.049: 98.5739% ( 1) 00:10:23.606 5.262 - 5.286: 98.5816% ( 1) 00:10:23.606 5.310 - 5.333: 98.5893% ( 1) 00:10:23.606 5.452 - 5.476: 98.5970% ( 1) 00:10:23.606 5.547 - 5.570: 98.6047% ( 1) 00:10:23.606 5.618 - 5.641: 98.6124% ( 1) 00:10:23.606 5.760 - 5.784: 98.6278% ( 2) 00:10:23.606 5.807 - 5.831: 98.6355% ( 1) 00:10:23.606 5.855 - 5.879: 98.6432% ( 1) 00:10:23.606 5.879 - 5.902: 98.6509% ( 1) 00:10:23.606 6.021 - 6.044: 98.6586% ( 1) 00:10:23.606 6.116 - 6.163: 98.6664% ( 1) 00:10:23.606 6.210 - 6.258: 98.6818% ( 2) 00:10:23.606 6.305 - 6.353: 98.6895% ( 1) 00:10:23.606 6.400 - 6.447: 98.6972% ( 1) 00:10:23.606 6.590 - 6.637: 98.7049% ( 1) 00:10:23.606 8.770 - 8.818: 98.7126% ( 1) 00:10:23.606 10.193 - 10.240: 98.7203% ( 1) 00:10:23.606 11.093 - 11.141: 98.7280% ( 1) 00:10:23.606 11.141 - 11.188: 98.7357% ( 1) 00:10:23.606 15.360 - 15.455: 98.7434% ( 1) 00:10:23.606 15.455 - 15.550: 98.7589% ( 2) 
00:10:23.606 15.739 - 15.834: 98.7743% ( 2) 00:10:23.606 15.834 - 15.929: 98.7974% ( 3) 00:10:23.606 15.929 - 16.024: 98.8205% ( 3) 00:10:23.606 16.024 - 16.119: 98.8591% ( 5) 00:10:23.606 16.119 - 16.213: 98.9285% ( 9) 00:10:23.606 16.213 - 16.308: 98.9824% ( 7) 00:10:23.606 16.308 - 16.403: 99.0364% ( 7) 00:10:23.606 16.403 - 16.498: 99.0749% ( 5) 00:10:23.606 16.498 - 16.593: 99.1212% ( 6) 00:10:23.606 16.593 - 16.687: 99.1829% ( 8) 00:10:23.606 16.687 - 16.782: 99.2060% ( 3) 00:10:23.606 16.782 - 16.877: 99.2599% ( 7) 00:10:23.606 16.877 - 16.972: 99.2677% ( 1) 00:10:23.606 16.972 - 17.067: 99.2831% ( 2) 00:10:23.606 17.067 - 17.161: 99.2985% ( 2) 00:10:23.606 17.161 - 17.256: 99.3216% ( 3) 00:10:23.606 17.351 - 17.446: 99.3370% ( 2) 00:10:23.606 17.541 - 17.636: 99.3447% ( 1) 00:10:23.606 17.730 - 17.825: 99.3525% ( 1) 00:10:23.606 17.825 - 17.920: 99.3602% ( 1) 00:10:23.606 17.920 - 18.015: 99.3679% ( 1) 00:10:23.606 18.110 - 18.204: 99.3756% ( 1) 00:10:23.606 18.299 - 18.394: 99.3833% ( 1) 00:10:23.606 18.679 - 18.773: 99.3910% ( 1) 00:10:23.606 19.058 - 19.153: 99.3987% ( 1) 00:10:23.606 19.911 - 20.006: 99.4064% ( 1) 00:10:23.606 23.609 - 23.704: 99.4141% ( 1) 00:10:23.606 26.359 - 26.548: 99.4218% ( 1) 00:10:23.606 2390.850 - 2402.987: 99.4295% ( 1) 00:10:23.606 3980.705 - 4004.978: 99.7687% ( 44) 00:10:23.606 4004.978 - 4029.250: 100.0000% ( 30) 00:10:23.606 00:10:23.606 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:23.606 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:23.606 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:23.606 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:23.606 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:23.606 [ 00:10:23.606 { 00:10:23.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:23.606 "subtype": "Discovery", 00:10:23.607 "listen_addresses": [], 00:10:23.607 "allow_any_host": true, 00:10:23.607 "hosts": [] 00:10:23.607 }, 00:10:23.607 { 00:10:23.607 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:23.607 "subtype": "NVMe", 00:10:23.607 "listen_addresses": [ 00:10:23.607 { 00:10:23.607 "trtype": "VFIOUSER", 00:10:23.607 "adrfam": "IPv4", 00:10:23.607 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:23.607 "trsvcid": "0" 00:10:23.607 } 00:10:23.607 ], 00:10:23.607 "allow_any_host": true, 00:10:23.607 "hosts": [], 00:10:23.607 "serial_number": "SPDK1", 00:10:23.607 "model_number": "SPDK bdev Controller", 00:10:23.607 "max_namespaces": 32, 00:10:23.607 "min_cntlid": 1, 00:10:23.607 "max_cntlid": 65519, 00:10:23.607 "namespaces": [ 00:10:23.607 { 00:10:23.607 "nsid": 1, 00:10:23.607 "bdev_name": "Malloc1", 00:10:23.607 "name": "Malloc1", 00:10:23.607 "nguid": "A1030108ADC641CDB0D743E69464191C", 00:10:23.607 "uuid": "a1030108-adc6-41cd-b0d7-43e69464191c" 00:10:23.607 }, 00:10:23.607 { 00:10:23.607 "nsid": 2, 00:10:23.607 "bdev_name": "Malloc3", 00:10:23.607 "name": "Malloc3", 00:10:23.607 "nguid": "9BF415037A36480DB856E2E455E0957D", 00:10:23.607 "uuid": "9bf41503-7a36-480d-b856-e2e455e0957d" 00:10:23.607 } 00:10:23.607 ] 00:10:23.607 }, 00:10:23.607 { 00:10:23.607 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:23.607 "subtype": 
"NVMe", 00:10:23.607 "listen_addresses": [ 00:10:23.607 { 00:10:23.607 "trtype": "VFIOUSER", 00:10:23.607 "adrfam": "IPv4", 00:10:23.607 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:23.607 "trsvcid": "0" 00:10:23.607 } 00:10:23.607 ], 00:10:23.607 "allow_any_host": true, 00:10:23.607 "hosts": [], 00:10:23.607 "serial_number": "SPDK2", 00:10:23.607 "model_number": "SPDK bdev Controller", 00:10:23.607 "max_namespaces": 32, 00:10:23.607 "min_cntlid": 1, 00:10:23.607 "max_cntlid": 65519, 00:10:23.607 "namespaces": [ 00:10:23.607 { 00:10:23.607 "nsid": 1, 00:10:23.607 "bdev_name": "Malloc2", 00:10:23.607 "name": "Malloc2", 00:10:23.607 "nguid": "A2DD282EADB04DE48DA9054701E6C318", 00:10:23.607 "uuid": "a2dd282e-adb0-4de4-8da9-054701e6c318" 00:10:23.607 } 00:10:23.607 ] 00:10:23.607 } 00:10:23.607 ] 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3322552 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:23.892 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.892 [2024-05-15 04:11:11.793455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:23.892 Malloc4 00:10:23.892 04:11:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:24.149 [2024-05-15 04:11:12.124084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:24.149 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:24.406 Asynchronous Event Request test 00:10:24.406 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.406 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.406 Registering asynchronous event callbacks... 00:10:24.406 Starting namespace attribute notice tests for all controllers... 00:10:24.406 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:24.406 aer_cb - Changed Namespace 00:10:24.406 Cleaning up... 
00:10:24.406 [ 00:10:24.406 { 00:10:24.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:24.406 "subtype": "Discovery", 00:10:24.406 "listen_addresses": [], 00:10:24.406 "allow_any_host": true, 00:10:24.406 "hosts": [] 00:10:24.406 }, 00:10:24.406 { 00:10:24.406 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:24.406 "subtype": "NVMe", 00:10:24.406 "listen_addresses": [ 00:10:24.406 { 00:10:24.406 "trtype": "VFIOUSER", 00:10:24.406 "adrfam": "IPv4", 00:10:24.406 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:24.406 "trsvcid": "0" 00:10:24.406 } 00:10:24.406 ], 00:10:24.406 "allow_any_host": true, 00:10:24.406 "hosts": [], 00:10:24.406 "serial_number": "SPDK1", 00:10:24.406 "model_number": "SPDK bdev Controller", 00:10:24.406 "max_namespaces": 32, 00:10:24.406 "min_cntlid": 1, 00:10:24.406 "max_cntlid": 65519, 00:10:24.406 "namespaces": [ 00:10:24.406 { 00:10:24.406 "nsid": 1, 00:10:24.406 "bdev_name": "Malloc1", 00:10:24.406 "name": "Malloc1", 00:10:24.406 "nguid": "A1030108ADC641CDB0D743E69464191C", 00:10:24.406 "uuid": "a1030108-adc6-41cd-b0d7-43e69464191c" 00:10:24.406 }, 00:10:24.406 { 00:10:24.406 "nsid": 2, 00:10:24.406 "bdev_name": "Malloc3", 00:10:24.406 "name": "Malloc3", 00:10:24.406 "nguid": "9BF415037A36480DB856E2E455E0957D", 00:10:24.406 "uuid": "9bf41503-7a36-480d-b856-e2e455e0957d" 00:10:24.406 } 00:10:24.406 ] 00:10:24.406 }, 00:10:24.406 { 00:10:24.406 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:24.406 "subtype": "NVMe", 00:10:24.406 "listen_addresses": [ 00:10:24.406 { 00:10:24.406 "trtype": "VFIOUSER", 00:10:24.406 "adrfam": "IPv4", 00:10:24.406 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:24.406 "trsvcid": "0" 00:10:24.406 } 00:10:24.406 ], 00:10:24.406 "allow_any_host": true, 00:10:24.406 "hosts": [], 00:10:24.406 "serial_number": "SPDK2", 00:10:24.406 "model_number": "SPDK bdev Controller", 00:10:24.406 "max_namespaces": 32, 00:10:24.406 "min_cntlid": 1, 00:10:24.406 "max_cntlid": 65519, 00:10:24.406 "namespaces": [ 00:10:24.406 { 00:10:24.406 "nsid": 1, 00:10:24.406 "bdev_name": "Malloc2", 00:10:24.406 "name": "Malloc2", 00:10:24.406 "nguid": "A2DD282EADB04DE48DA9054701E6C318", 00:10:24.406 "uuid": "a2dd282e-adb0-4de4-8da9-054701e6c318" 00:10:24.406 }, 00:10:24.406 { 00:10:24.406 "nsid": 2, 00:10:24.406 "bdev_name": "Malloc4", 00:10:24.406 "name": "Malloc4", 00:10:24.406 "nguid": "1A0C22188E71429F8FCC64915F4F56B5", 00:10:24.406 "uuid": "1a0c2218-8e71-429f-8fcc-64915f4f56b5" 00:10:24.406 } 00:10:24.406 ] 00:10:24.406 } 00:10:24.406 ] 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3322552 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3316949 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3316949 ']' 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3316949 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3316949 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3316949' 00:10:24.406 killing process with pid 3316949 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3316949 00:10:24.406 [2024-05-15 04:11:12.410338] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:24.406 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3316949 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3322692 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3322692' 00:10:24.974 Process pid: 3322692 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3322692 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3322692 ']' 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:24.974 04:11:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 [2024-05-15 04:11:12.838960] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:24.974 [2024-05-15 04:11:12.840031] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:24.974 [2024-05-15 04:11:12.840095] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.974 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.974 [2024-05-15 04:11:12.914513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.234 [2024-05-15 04:11:13.031910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:25.234 [2024-05-15 04:11:13.031982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.234 [2024-05-15 04:11:13.031998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.234 [2024-05-15 04:11:13.032011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.234 [2024-05-15 04:11:13.032023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.234 [2024-05-15 04:11:13.032129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.234 [2024-05-15 04:11:13.032183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.234 [2024-05-15 04:11:13.032252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.234 [2024-05-15 04:11:13.032254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.234 [2024-05-15 04:11:13.132638] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:25.234 [2024-05-15 04:11:13.132889] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:25.234 [2024-05-15 04:11:13.133187] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:25.234 [2024-05-15 04:11:13.133828] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:25.235 [2024-05-15 04:11:13.134118] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
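Before the individual RPCs below scroll past, here is the same interrupt-mode bring-up condensed into one sketch for a single vfio-user device. Binary paths, core mask, bdev size and NQNs are the ones seen in this log; the backgrounding and fixed sleep are illustrative stand-ins for the test's waitforlisten helper, and the extra -M -I transport flags are passed through exactly as the test script does:

  #!/usr/bin/env bash
  # Condensed sketch of the --interrupt-mode nvmf_tgt bring-up for one vfio-user device.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1                                                   # illustrative; the test waits on the RPC socket instead
  RPC="$SPDK/scripts/rpc.py"
  "$RPC" nvmf_create_transport -t VFIOUSER -M -I            # transport args used by this pass
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0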
00:10:26.169 04:11:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:26.169 04:11:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:26.169 04:11:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:27.102 04:11:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:27.361 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:27.361 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:27.361 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:27.361 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:27.361 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:27.620 Malloc1 00:10:27.620 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:27.879 04:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:28.137 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:28.395 [2024-05-15 04:11:16.316832] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:28.395 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:28.395 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:28.395 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:28.654 Malloc2 00:10:28.654 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:28.912 04:11:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:29.170 04:11:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3322692 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3322692 ']' 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3322692 
00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3322692 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:29.428 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3322692' 00:10:29.429 killing process with pid 3322692 00:10:29.429 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3322692 00:10:29.429 [2024-05-15 04:11:17.418272] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:29.429 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3322692 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.998 00:10:29.998 real 0m53.627s 00:10:29.998 user 3m30.786s 00:10:29.998 sys 0m4.922s 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 ************************************ 00:10:29.998 END TEST nvmf_vfio_user 00:10:29.998 ************************************ 00:10:29.998 04:11:17 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.998 04:11:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:29.998 04:11:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:29.998 04:11:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 ************************************ 00:10:29.998 START TEST nvmf_vfio_user_nvme_compliance 00:10:29.998 ************************************ 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.998 * Looking for test storage... 
00:10:29.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3323309 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3323309' 00:10:29.998 Process pid: 3323309 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3323309 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3323309 ']' 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:29.998 04:11:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 [2024-05-15 04:11:17.925217] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:10:29.998 [2024-05-15 04:11:17.925297] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.998 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.998 [2024-05-15 04:11:17.994597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.257 [2024-05-15 04:11:18.105153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.257 [2024-05-15 04:11:18.105221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.257 [2024-05-15 04:11:18.105247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.257 [2024-05-15 04:11:18.105260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.257 [2024-05-15 04:11:18.105271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.257 [2024-05-15 04:11:18.105327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.257 [2024-05-15 04:11:18.105395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.257 [2024-05-15 04:11:18.105391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.257 04:11:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:30.257 04:11:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:10:30.257 04:11:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.221 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.480 malloc0 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.480 [2024-05-15 04:11:19.287170] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.480 04:11:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:31.480 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.480 00:10:31.480 00:10:31.480 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.480 http://cunit.sourceforge.net/ 00:10:31.480 00:10:31.480 00:10:31.480 Suite: nvme_compliance 00:10:31.480 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 04:11:19.464713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.480 [2024-05-15 04:11:19.466258] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:31.480 [2024-05-15 04:11:19.466283] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:31.480 [2024-05-15 04:11:19.466310] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:31.480 [2024-05-15 04:11:19.467737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.738 passed 00:10:31.738 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 04:11:19.553376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.738 [2024-05-15 04:11:19.556405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.738 passed 00:10:31.738 Test: admin_identify_ns ...[2024-05-15 04:11:19.643607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.738 [2024-05-15 04:11:19.702963] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:31.738 [2024-05-15 04:11:19.710949] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:31.738 [2024-05-15 04:11:19.732056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.996 passed 00:10:31.996 Test: admin_get_features_mandatory_features ...[2024-05-15 04:11:19.816144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.996 [2024-05-15 04:11:19.819167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.996 passed 00:10:31.996 Test: admin_get_features_optional_features ...[2024-05-15 04:11:19.903732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.996 [2024-05-15 04:11:19.908765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.996 passed 00:10:31.996 Test: admin_set_features_number_of_queues ...[2024-05-15 04:11:19.991512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.254 [2024-05-15 04:11:20.097202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.254 passed 00:10:32.254 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 04:11:20.181954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.254 [2024-05-15 04:11:20.184985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.254 passed 
00:10:32.512 Test: admin_get_log_page_with_lpo ...[2024-05-15 04:11:20.273104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.512 [2024-05-15 04:11:20.339948] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:32.512 [2024-05-15 04:11:20.353060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.512 passed 00:10:32.512 Test: fabric_property_get ...[2024-05-15 04:11:20.436037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.512 [2024-05-15 04:11:20.437312] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:32.512 [2024-05-15 04:11:20.439053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.512 passed 00:10:32.512 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 04:11:20.525583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.512 [2024-05-15 04:11:20.526858] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:32.770 [2024-05-15 04:11:20.528601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.770 passed 00:10:32.770 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 04:11:20.607467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.770 [2024-05-15 04:11:20.694952] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:32.770 [2024-05-15 04:11:20.710941] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:32.770 [2024-05-15 04:11:20.716079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.770 passed 00:10:33.027 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 04:11:20.796707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.027 [2024-05-15 04:11:20.797986] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:33.027 [2024-05-15 04:11:20.801738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.027 passed 00:10:33.027 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 04:11:20.884770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.027 [2024-05-15 04:11:20.957955] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:33.027 [2024-05-15 04:11:20.983950] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:33.027 [2024-05-15 04:11:20.989074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.027 passed 00:10:33.285 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 04:11:21.072004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.285 [2024-05-15 04:11:21.073286] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:33.286 [2024-05-15 04:11:21.073336] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:33.286 [2024-05-15 04:11:21.075023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.286 passed 00:10:33.286 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
04:11:21.157579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.286 [2024-05-15 04:11:21.249968] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:33.286 [2024-05-15 04:11:21.257953] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:33.286 [2024-05-15 04:11:21.265969] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:33.286 [2024-05-15 04:11:21.273942] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:33.544 [2024-05-15 04:11:21.303055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.544 passed 00:10:33.544 Test: admin_create_io_sq_verify_pc ...[2024-05-15 04:11:21.386848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.544 [2024-05-15 04:11:21.403954] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:33.544 [2024-05-15 04:11:21.421252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.544 passed 00:10:33.544 Test: admin_create_io_qp_max_qps ...[2024-05-15 04:11:21.504822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.919 [2024-05-15 04:11:22.607972] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:35.180 [2024-05-15 04:11:23.011273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.180 passed 00:10:35.180 Test: admin_create_io_sq_shared_cq ...[2024-05-15 04:11:23.094473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.441 [2024-05-15 04:11:23.229956] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:35.441 [2024-05-15 04:11:23.267042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.441 passed 00:10:35.441 00:10:35.441 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.441 suites 1 1 n/a 0 0 00:10:35.441 tests 18 18 18 0 0 00:10:35.441 asserts 360 360 360 0 n/a 00:10:35.441 00:10:35.441 Elapsed time = 1.578 seconds 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3323309 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3323309 ']' 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3323309 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3323309 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3323309' 00:10:35.441 killing process with pid 3323309 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 3323309 00:10:35.441 [2024-05-15 04:11:23.344250] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:35.441 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3323309 00:10:35.700 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:35.700 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:35.700 00:10:35.700 real 0m5.849s 00:10:35.700 user 0m16.314s 00:10:35.700 sys 0m0.565s 00:10:35.700 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:35.700 04:11:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:35.700 ************************************ 00:10:35.700 END TEST nvmf_vfio_user_nvme_compliance 00:10:35.700 ************************************ 00:10:35.700 04:11:23 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:35.700 04:11:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:35.700 04:11:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:35.700 04:11:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.700 ************************************ 00:10:35.700 START TEST nvmf_vfio_user_fuzz 00:10:35.700 ************************************ 00:10:35.700 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:35.959 * Looking for test storage... 
00:10:35.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:35.959 04:11:23 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3324046 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3324046' 00:10:35.959 Process pid: 3324046 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3324046 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3324046 ']' 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:35.959 04:11:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:36.219 04:11:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:36.219 04:11:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:10:36.219 04:11:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.157 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.425 malloc0 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.425 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.426 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.426 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:37.426 04:11:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:09.539 Fuzzing completed. Shutting down the fuzz application 00:11:09.539 00:11:09.539 Dumping successful admin opcodes: 00:11:09.539 8, 9, 10, 24, 00:11:09.539 Dumping successful io opcodes: 00:11:09.539 0, 00:11:09.539 NS: 0x200003a1ef00 I/O qp, Total commands completed: 684250, total successful commands: 2665, random_seed: 1031468224 00:11:09.539 NS: 0x200003a1ef00 admin qp, Total commands completed: 87320, total successful commands: 697, random_seed: 325251712 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3324046 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3324046 ']' 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3324046 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3324046 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3324046' 00:11:09.539 killing process with pid 3324046 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3324046 00:11:09.539 04:11:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3324046 00:11:09.539 04:11:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
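The 30-second fuzz pass summarized above uses the invocation already visible in the trace; a minimal sketch for rerunning it against the same vfio-user endpoint (target assumed to be set up as in the compliance section, path shortened to be relative to the SPDK tree) is:

  # Sketch: rerun the nvme_fuzz pass from the trace above.
  # -m 0x2 is the SPDK core mask (core 1); -t 30 appears to be the run time in
  # seconds, matching the ~33 s wall clock reported; -S/-N/-a are kept as traced.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a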
00:11:09.539 04:11:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:09.539 00:11:09.539 real 0m33.378s 00:11:09.539 user 0m35.404s 00:11:09.539 sys 0m26.313s 00:11:09.539 04:11:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.539 04:11:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.539 ************************************ 00:11:09.539 END TEST nvmf_vfio_user_fuzz 00:11:09.539 ************************************ 00:11:09.539 04:11:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.539 04:11:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:09.539 04:11:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.539 04:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.539 ************************************ 00:11:09.539 START TEST nvmf_host_management 00:11:09.539 ************************************ 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.539 * Looking for test storage... 00:11:09.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.539 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.540 04:11:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.111 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.112 04:11:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:12.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:12.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:12.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:12.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:11:12.112 00:11:12.112 --- 10.0.0.2 ping statistics --- 00:11:12.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.112 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:11:12.112 00:11:12.112 --- 10.0.0.1 ping statistics --- 00:11:12.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.112 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3329908 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3329908 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3329908 ']' 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.112 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.113 04:11:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.113 [2024-05-15 04:11:59.825684] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
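Before nvmf_tgt starts, nvmf_tcp_init (traced above) splits the two detected E810 ports across a network namespace so the target and initiator sides get distinct addresses. Condensed from the trace, with the device names as detected (cvl_0_0/cvl_0_1), the plumbing is roughly:

  # Sketch of the data-path setup traced above.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side keeps 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions, as in the ping output above:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched with the "ip netns exec cvl_0_0_ns_spdk" prefix in the trace that follows: the target process must run inside the namespace that owns cvl_0_0.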
00:11:12.113 [2024-05-15 04:11:59.825772] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.113 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.113 [2024-05-15 04:11:59.909784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.113 [2024-05-15 04:12:00.031268] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.113 [2024-05-15 04:12:00.031331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.113 [2024-05-15 04:12:00.031361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.113 [2024-05-15 04:12:00.031372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.113 [2024-05-15 04:12:00.031382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.113 [2024-05-15 04:12:00.031466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.113 [2024-05-15 04:12:00.031530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.113 [2024-05-15 04:12:00.031580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:12.113 [2024-05-15 04:12:00.031583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.051 [2024-05-15 04:12:00.811804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.051 04:12:00 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.051 Malloc0 00:11:13.051 [2024-05-15 04:12:00.872310] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:13.051 [2024-05-15 04:12:00.872635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3330150 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3330150 /var/tmp/bdevperf.sock 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3330150 ']' 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:13.051 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.052 { 00:11:13.052 "params": { 00:11:13.052 "name": "Nvme$subsystem", 00:11:13.052 "trtype": "$TEST_TRANSPORT", 00:11:13.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.052 "adrfam": "ipv4", 00:11:13.052 "trsvcid": "$NVMF_PORT", 00:11:13.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.052 "hdgst": ${hdgst:-false}, 00:11:13.052 "ddgst": ${ddgst:-false} 00:11:13.052 }, 00:11:13.052 "method": "bdev_nvme_attach_controller" 00:11:13.052 } 00:11:13.052 EOF 00:11:13.052 )") 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
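The gen_nvmf_target_json heredoc above is expanded and piped to bdevperf through /dev/fd/63; the expanded controller parameters are printed on the next trace lines. As a standalone file, and assuming the usual subsystems/bdev wrapper that the helper appears to emit, the equivalent launch would look roughly like this sketch (parameter values are the ones printed in the trace, file path is illustrative only):

  # Sketch: run bdevperf against the TCP target by hand.
  cat > /tmp/bdevperf_nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
      -q 64 -o 65536 -w verify -t 10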
00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:13.052 04:12:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.052 "params": { 00:11:13.052 "name": "Nvme0", 00:11:13.052 "trtype": "tcp", 00:11:13.052 "traddr": "10.0.0.2", 00:11:13.052 "adrfam": "ipv4", 00:11:13.052 "trsvcid": "4420", 00:11:13.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:13.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:13.052 "hdgst": false, 00:11:13.052 "ddgst": false 00:11:13.052 }, 00:11:13.052 "method": "bdev_nvme_attach_controller" 00:11:13.052 }' 00:11:13.052 [2024-05-15 04:12:00.946716] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:13.052 [2024-05-15 04:12:00.946798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330150 ] 00:11:13.052 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.052 [2024-05-15 04:12:01.022596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.310 [2024-05-15 04:12:01.133936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.568 Running I/O for 10 seconds... 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.138 04:12:01 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.138 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.138 [2024-05-15 04:12:01.980221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170cab0 is same with the state(5) to be set 00:11:14.138 [2024-05-15 04:12:01.980630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.138 [2024-05-15 04:12:01.980674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.980695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.138 [2024-05-15 04:12:01.980710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.980724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.138 [2024-05-15 04:12:01.980738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.980755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.138 [2024-05-15 04:12:01.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.980782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde990 is same with the state(5) to be set 00:11:14.138 [2024-05-15 04:12:01.981058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.138 [2024-05-15 04:12:01.981847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.138 [2024-05-15 04:12:01.981862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.981876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.981892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.981906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.981921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.981944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.981961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.981983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982695] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.982972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.982989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.983003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.983018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:14.139 [2024-05-15 04:12:01.983032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.139 [2024-05-15 04:12:01.983114] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x210ff20 was disconnected and freed. reset controller. 00:11:14.139 [2024-05-15 04:12:01.984290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:14.139 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.139 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:14.139 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.139 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:14.140 task offset: 76416 on job bdev=Nvme0n1 fails 00:11:14.140 00:11:14.140 Latency(us) 00:11:14.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.140 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:14.140 Job: Nvme0n1 ended in about 0.63 seconds with error 00:11:14.140 Verification LBA range: start 0x0 length 0x400 00:11:14.140 Nvme0n1 : 0.63 920.54 57.53 102.28 0.00 61433.91 2560.76 53593.88 00:11:14.140 =================================================================================================================== 00:11:14.140 Total : 920.54 57.53 102.28 0.00 61433.91 2560.76 53593.88 00:11:14.140 [2024-05-15 04:12:01.986169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:14.140 [2024-05-15 04:12:01.986203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cde990 (9): Bad file descriptor 00:11:14.140 04:12:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.140 04:12:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:14.140 [2024-05-15 04:12:02.000588] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
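Note: before the host was removed above, the test gated on bdevperf having issued real I/O (the read_io_count=515 -ge 100 check earlier in this run). A minimal standalone sketch of that polling step, assuming rpc.py and jq are on PATH and that bdevperf exposes its RPC socket at /var/tmp/bdevperf.sock as in this log:

# sketch only; the in-tree waitforio helper in host_management.sh differs in details
waitforio() {
    local sock=$1 bdev=$2 i ops
    for i in {1..10}; do
        ops=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0   # enough reads have completed against the attached controller
        sleep 0.5                        # retry interval is an assumption, not taken from the log
    done
    return 1
}
# usage mirroring this run: waitforio /var/tmp/bdevperf.sock Nvme0n1

Only once that loop succeeds does the test remove the host NQN, which is what produces the ABORTED - SQ DELETION completions and the controller reset seen above.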
00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3330150 00:11:15.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3330150) - No such process 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:15.078 { 00:11:15.078 "params": { 00:11:15.078 "name": "Nvme$subsystem", 00:11:15.078 "trtype": "$TEST_TRANSPORT", 00:11:15.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.078 "adrfam": "ipv4", 00:11:15.078 "trsvcid": "$NVMF_PORT", 00:11:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.078 "hdgst": ${hdgst:-false}, 00:11:15.078 "ddgst": ${ddgst:-false} 00:11:15.078 }, 00:11:15.078 "method": "bdev_nvme_attach_controller" 00:11:15.078 } 00:11:15.078 EOF 00:11:15.078 )") 00:11:15.078 04:12:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:15.078 04:12:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:15.078 04:12:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:15.079 04:12:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:15.079 "params": { 00:11:15.079 "name": "Nvme0", 00:11:15.079 "trtype": "tcp", 00:11:15.079 "traddr": "10.0.0.2", 00:11:15.079 "adrfam": "ipv4", 00:11:15.079 "trsvcid": "4420", 00:11:15.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:15.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:15.079 "hdgst": false, 00:11:15.079 "ddgst": false 00:11:15.079 }, 00:11:15.079 "method": "bdev_nvme_attach_controller" 00:11:15.079 }' 00:11:15.079 [2024-05-15 04:12:03.036936] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:15.079 [2024-05-15 04:12:03.037032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330480 ] 00:11:15.079 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.338 [2024-05-15 04:12:03.109483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.338 [2024-05-15 04:12:03.222379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.596 Running I/O for 1 seconds... 
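Note: gen_nvmf_target_json above expands a heredoc template into the bdev_nvme_attach_controller parameters that bdevperf reads through --json /dev/fd/62. A hand-written equivalent is sketched below, assuming the wrapper follows SPDK's usual subsystems/config JSON layout (the helper's full output is not printed in this log) and reusing the address and NQNs shown above:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1

The flags match the invocation in the log: queue depth 64, 64 KiB I/O, verify workload, 1 second run time.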
00:11:16.974 00:11:16.974 Latency(us) 00:11:16.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.974 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:16.974 Verification LBA range: start 0x0 length 0x400 00:11:16.974 Nvme0n1 : 1.01 1137.38 71.09 0.00 0.00 55483.40 12524.66 44661.57 00:11:16.974 =================================================================================================================== 00:11:16.974 Total : 1137.38 71.09 0.00 0.00 55483.40 12524.66 44661.57 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.974 rmmod nvme_tcp 00:11:16.974 rmmod nvme_fabrics 00:11:16.974 rmmod nvme_keyring 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3329908 ']' 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3329908 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3329908 ']' 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3329908 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3329908 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3329908' 00:11:16.974 killing process with pid 3329908 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3329908 00:11:16.974 [2024-05-15 04:12:04.944422] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:16.974 04:12:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3329908 00:11:17.232 [2024-05-15 04:12:05.224655] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.491 04:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.396 04:12:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.396 04:12:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:19.396 00:11:19.396 real 0m10.160s 00:11:19.396 user 0m24.443s 00:11:19.396 sys 0m3.117s 00:11:19.396 04:12:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.396 04:12:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.396 ************************************ 00:11:19.396 END TEST nvmf_host_management 00:11:19.396 ************************************ 00:11:19.396 04:12:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:19.396 04:12:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:19.396 04:12:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.396 04:12:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.396 ************************************ 00:11:19.396 START TEST nvmf_lvol 00:11:19.396 ************************************ 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:19.396 * Looking for test storage... 
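Note: the nvmf_host_management run that just ended was torn down by nvmftestfini plus the killprocess helper visible above (liveness check with kill -0, a ps comm= sanity check, then kill). A condensed sketch of that reaping step; the wait-for-exit at the end is an assumption rather than something shown in the log:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                                    # is the target still alive?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # the in-tree helper special-cases a sudo parent here
    echo "killing process with pid $pid"
    kill "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done          # wait for exit (assumed behaviour)
}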
00:11:19.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.396 04:12:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.396 04:12:07 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.654 04:12:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:22.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:22.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:22.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:22.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.186 
04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.186 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:11:22.187 00:11:22.187 --- 10.0.0.2 ping statistics --- 00:11:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.187 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:11:22.187 00:11:22.187 --- 10.0.0.1 ping statistics --- 00:11:22.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.187 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.187 04:12:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3333495 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3333495 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3333495 ']' 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:22.187 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.187 [2024-05-15 04:12:10.068619] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:22.187 [2024-05-15 04:12:10.068704] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.187 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.187 [2024-05-15 04:12:10.148353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.446 [2024-05-15 04:12:10.265873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.446 [2024-05-15 04:12:10.265938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:22.446 [2024-05-15 04:12:10.265968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.446 [2024-05-15 04:12:10.265979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.446 [2024-05-15 04:12:10.265989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.446 [2024-05-15 04:12:10.266057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.446 [2024-05-15 04:12:10.266145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.446 [2024-05-15 04:12:10.266148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.446 04:12:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:22.704 [2024-05-15 04:12:10.679791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.704 04:12:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.272 04:12:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:23.272 04:12:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.272 04:12:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:23.272 04:12:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:23.838 04:12:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:23.838 04:12:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2a11164c-0d7c-436a-ae7b-d69af56446ab 00:11:23.838 04:12:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a11164c-0d7c-436a-ae7b-d69af56446ab lvol 20 00:11:24.096 04:12:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d8c65f67-2713-4aef-a704-991e8a7e35f6 00:11:24.096 04:12:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:24.354 04:12:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8c65f67-2713-4aef-a704-991e8a7e35f6 00:11:24.612 04:12:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
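Note: the sequence of RPCs above provisions everything the lvol test exercises: a TCP transport, two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol, and a subsystem/namespace/listener exposing it on 10.0.0.2:4420. Condensed as a plain shell sketch (rpc.py path shortened; the UUIDs are per-run values, captured here into variables):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID, e.g. 2a11164c-...
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, e.g. d8c65f67-...
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The snapshot, resize, clone, and inflate calls that follow in the log then operate on that lvol UUID while spdk_nvme_perf drives random writes against the exported namespace.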
00:11:24.870 [2024-05-15 04:12:12.840889] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:24.870 [2024-05-15 04:12:12.841192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.870 04:12:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.127 04:12:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3333921 00:11:25.127 04:12:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:25.127 04:12:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:25.127 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.499 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d8c65f67-2713-4aef-a704-991e8a7e35f6 MY_SNAPSHOT 00:11:26.499 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bc10d0b8-5552-4679-951d-469167903458 00:11:26.500 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d8c65f67-2713-4aef-a704-991e8a7e35f6 30 00:11:26.800 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bc10d0b8-5552-4679-951d-469167903458 MY_CLONE 00:11:27.077 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7f6594df-9dd3-4566-9906-ca406cc80394 00:11:27.077 04:12:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7f6594df-9dd3-4566-9906-ca406cc80394 00:11:27.643 04:12:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3333921 00:11:35.751 Initializing NVMe Controllers 00:11:35.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:35.751 Controller IO queue size 128, less than required. 00:11:35.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:35.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:35.751 Initialization complete. Launching workers. 
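(For orientation: the nvmf_lvol target setup and workload traced above condense to roughly the RPC sequence below. This is a sketch, not part of the test script itself: "rpc.py" stands for the full scripts/rpc.py path printed in the trace, spdk_nvme_perf for build/bin/spdk_nvme_perf, and the shell variables are illustrative stand-ins for the UUIDs actually returned in this run.)

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                  # -> Malloc0
    rpc.py bdev_malloc_create 64 512                  # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    # while the workload runs: snapshot, resize, clone and inflate the volume
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"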
00:11:35.751 ======================================================== 00:11:35.751 Latency(us) 00:11:35.751 Device Information : IOPS MiB/s Average min max 00:11:35.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10316.90 40.30 12409.80 1372.13 80447.90 00:11:35.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10669.00 41.68 12002.71 2259.65 84459.65 00:11:35.751 ======================================================== 00:11:35.751 Total : 20985.90 81.98 12202.84 1372.13 84459.65 00:11:35.751 00:11:35.751 04:12:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:36.008 04:12:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8c65f67-2713-4aef-a704-991e8a7e35f6 00:11:36.266 04:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a11164c-0d7c-436a-ae7b-d69af56446ab 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.524 rmmod nvme_tcp 00:11:36.524 rmmod nvme_fabrics 00:11:36.524 rmmod nvme_keyring 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3333495 ']' 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3333495 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3333495 ']' 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3333495 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3333495 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3333495' 00:11:36.524 killing process with pid 3333495 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3333495 00:11:36.524 [2024-05-15 04:12:24.412121] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:11:36.524 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3333495 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.782 04:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.316 00:11:39.316 real 0m19.435s 00:11:39.316 user 1m3.929s 00:11:39.316 sys 0m6.346s 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:39.316 ************************************ 00:11:39.316 END TEST nvmf_lvol 00:11:39.316 ************************************ 00:11:39.316 04:12:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:39.316 04:12:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:39.316 04:12:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:39.316 04:12:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.316 ************************************ 00:11:39.316 START TEST nvmf_lvs_grow 00:11:39.316 ************************************ 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:39.316 * Looking for test storage... 
00:11:39.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.316 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.317 04:12:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:11:41.848 00:11:41.848 --- 10.0.0.2 ping statistics --- 00:11:41.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.848 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:41.848 00:11:41.848 --- 10.0.0.1 ping statistics --- 00:11:41.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.848 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:41.848 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3337470 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3337470 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3337470 ']' 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.849 04:12:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.849 [2024-05-15 04:12:29.534292] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:11:41.849 [2024-05-15 04:12:29.534372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.849 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.849 [2024-05-15 04:12:29.617938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.849 [2024-05-15 04:12:29.733796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.849 [2024-05-15 04:12:29.733859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:41.849 [2024-05-15 04:12:29.733875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.849 [2024-05-15 04:12:29.733888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.849 [2024-05-15 04:12:29.733900] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.849 [2024-05-15 04:12:29.733938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.782 04:12:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:43.041 [2024-05-15 04:12:30.831648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.041 ************************************ 00:11:43.041 START TEST lvs_grow_clean 00:11:43.041 ************************************ 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.041 04:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.300 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
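(Condensed, for orientation: nvmf_tcp_init above moves one E810 port into a private network namespace and runs the target there, so the initiator side reaches it over the physical link. A rough sketch of what the trace shows, with paths shortened and the cvl_0_0/cvl_0_1 names taken from the device discovery above:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host reaches the target IP
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the target reaches back
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1       # target runs inside the namespace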
aio_bdev=aio_bdev 00:11:43.300 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:43.558 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:43.558 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:43.558 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:43.816 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:43.816 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:43.816 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ef185fd-c4b2-4c26-b889-51f409addcea lvol 150 00:11:44.074 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c02478bf-15b6-4a86-a282-deb182e47055 00:11:44.074 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.074 04:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:44.331 [2024-05-15 04:12:32.125126] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:44.331 [2024-05-15 04:12:32.125235] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:44.331 true 00:11:44.332 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:44.332 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:44.590 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:44.590 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.849 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c02478bf-15b6-4a86-a282-deb182e47055 00:11:45.107 04:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:45.107 [2024-05-15 04:12:33.115926] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:45.107 [2024-05-15 
04:12:33.116321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3338035 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3338035 /var/tmp/bdevperf.sock 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3338035 ']' 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:45.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:45.365 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:45.635 [2024-05-15 04:12:33.415422] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:11:45.636 [2024-05-15 04:12:33.415505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338035 ] 00:11:45.636 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.636 [2024-05-15 04:12:33.489403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.636 [2024-05-15 04:12:33.606247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.896 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:45.896 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:11:45.896 04:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:46.153 Nvme0n1 00:11:46.153 04:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:46.719 [ 00:11:46.719 { 00:11:46.719 "name": "Nvme0n1", 00:11:46.719 "aliases": [ 00:11:46.719 "c02478bf-15b6-4a86-a282-deb182e47055" 00:11:46.719 ], 00:11:46.719 "product_name": "NVMe disk", 00:11:46.719 "block_size": 4096, 00:11:46.719 "num_blocks": 38912, 00:11:46.719 "uuid": "c02478bf-15b6-4a86-a282-deb182e47055", 00:11:46.719 "assigned_rate_limits": { 00:11:46.719 "rw_ios_per_sec": 0, 00:11:46.719 "rw_mbytes_per_sec": 0, 00:11:46.719 "r_mbytes_per_sec": 0, 00:11:46.719 "w_mbytes_per_sec": 0 00:11:46.719 }, 00:11:46.719 "claimed": false, 00:11:46.719 "zoned": false, 00:11:46.719 "supported_io_types": { 00:11:46.719 "read": true, 00:11:46.719 "write": true, 00:11:46.719 "unmap": true, 00:11:46.719 "write_zeroes": true, 00:11:46.719 "flush": true, 00:11:46.719 "reset": true, 00:11:46.719 "compare": true, 00:11:46.719 "compare_and_write": true, 00:11:46.719 "abort": true, 00:11:46.719 "nvme_admin": true, 00:11:46.719 "nvme_io": true 00:11:46.719 }, 00:11:46.719 "memory_domains": [ 00:11:46.719 { 00:11:46.719 "dma_device_id": "system", 00:11:46.719 "dma_device_type": 1 00:11:46.719 } 00:11:46.719 ], 00:11:46.719 "driver_specific": { 00:11:46.719 "nvme": [ 00:11:46.719 { 00:11:46.719 "trid": { 00:11:46.719 "trtype": "TCP", 00:11:46.719 "adrfam": "IPv4", 00:11:46.719 "traddr": "10.0.0.2", 00:11:46.719 "trsvcid": "4420", 00:11:46.719 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:46.719 }, 00:11:46.719 "ctrlr_data": { 00:11:46.719 "cntlid": 1, 00:11:46.719 "vendor_id": "0x8086", 00:11:46.719 "model_number": "SPDK bdev Controller", 00:11:46.719 "serial_number": "SPDK0", 00:11:46.719 "firmware_revision": "24.05", 00:11:46.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.719 "oacs": { 00:11:46.719 "security": 0, 00:11:46.719 "format": 0, 00:11:46.719 "firmware": 0, 00:11:46.719 "ns_manage": 0 00:11:46.719 }, 00:11:46.719 "multi_ctrlr": true, 00:11:46.719 "ana_reporting": false 00:11:46.719 }, 00:11:46.719 "vs": { 00:11:46.719 "nvme_version": "1.3" 00:11:46.719 }, 00:11:46.719 "ns_data": { 00:11:46.719 "id": 1, 00:11:46.719 "can_share": true 00:11:46.719 } 00:11:46.719 } 00:11:46.719 ], 00:11:46.719 "mp_policy": "active_passive" 00:11:46.719 } 00:11:46.719 } 00:11:46.719 ] 00:11:46.719 04:12:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3338171 00:11:46.719 04:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:46.719 04:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:46.719 Running I/O for 10 seconds... 00:11:47.687 Latency(us) 00:11:47.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.687 Nvme0n1 : 1.00 14159.00 55.31 0.00 0.00 0.00 0.00 0.00 00:11:47.687 =================================================================================================================== 00:11:47.687 Total : 14159.00 55.31 0.00 0.00 0.00 0.00 0.00 00:11:47.687 00:11:48.623 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:48.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.623 Nvme0n1 : 2.00 14287.50 55.81 0.00 0.00 0.00 0.00 0.00 00:11:48.623 =================================================================================================================== 00:11:48.623 Total : 14287.50 55.81 0.00 0.00 0.00 0.00 0.00 00:11:48.623 00:11:48.882 true 00:11:48.882 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:48.882 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:49.140 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:49.140 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:49.140 04:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3338171 00:11:49.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.707 Nvme0n1 : 3.00 14383.33 56.18 0.00 0.00 0.00 0.00 0.00 00:11:49.707 =================================================================================================================== 00:11:49.707 Total : 14383.33 56.18 0.00 0.00 0.00 0.00 0.00 00:11:49.707 00:11:50.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.651 Nvme0n1 : 4.00 14483.75 56.58 0.00 0.00 0.00 0.00 0.00 00:11:50.651 =================================================================================================================== 00:11:50.651 Total : 14483.75 56.58 0.00 0.00 0.00 0.00 0.00 00:11:50.651 00:11:51.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.586 Nvme0n1 : 5.00 14543.60 56.81 0.00 0.00 0.00 0.00 0.00 00:11:51.586 =================================================================================================================== 00:11:51.586 Total : 14543.60 56.81 0.00 0.00 0.00 0.00 0.00 00:11:51.586 00:11:52.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.961 Nvme0n1 : 6.00 14562.50 56.88 0.00 0.00 0.00 0.00 0.00 00:11:52.961 
=================================================================================================================== 00:11:52.961 Total : 14562.50 56.88 0.00 0.00 0.00 0.00 0.00 00:11:52.961 00:11:53.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.894 Nvme0n1 : 7.00 14603.29 57.04 0.00 0.00 0.00 0.00 0.00 00:11:53.894 =================================================================================================================== 00:11:53.894 Total : 14603.29 57.04 0.00 0.00 0.00 0.00 0.00 00:11:53.894 00:11:54.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.828 Nvme0n1 : 8.00 14641.75 57.19 0.00 0.00 0.00 0.00 0.00 00:11:54.828 =================================================================================================================== 00:11:54.828 Total : 14641.75 57.19 0.00 0.00 0.00 0.00 0.00 00:11:54.828 00:11:55.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.762 Nvme0n1 : 9.00 14678.78 57.34 0.00 0.00 0.00 0.00 0.00 00:11:55.762 =================================================================================================================== 00:11:55.762 Total : 14678.78 57.34 0.00 0.00 0.00 0.00 0.00 00:11:55.762 00:11:56.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.699 Nvme0n1 : 10.00 14710.20 57.46 0.00 0.00 0.00 0.00 0.00 00:11:56.699 =================================================================================================================== 00:11:56.699 Total : 14710.20 57.46 0.00 0.00 0.00 0.00 0.00 00:11:56.699 00:11:56.699 00:11:56.699 Latency(us) 00:11:56.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.699 Nvme0n1 : 10.00 14714.46 57.48 0.00 0.00 8693.03 5655.51 16214.09 00:11:56.699 =================================================================================================================== 00:11:56.699 Total : 14714.46 57.48 0.00 0.00 8693.03 5655.51 16214.09 00:11:56.699 0 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3338035 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3338035 ']' 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3338035 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3338035 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3338035' 00:11:56.699 killing process with pid 3338035 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3338035 00:11:56.699 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.699 00:11:56.699 Latency(us) 00:11:56.699 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:11:56.699 =================================================================================================================== 00:11:56.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:56.699 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3338035 00:11:56.957 04:12:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.214 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:57.472 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:57.472 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:57.730 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:57.730 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:57.730 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:57.989 [2024-05-15 04:12:45.905708] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:57.989 04:12:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:58.247 request: 00:11:58.247 { 00:11:58.247 "uuid": "5ef185fd-c4b2-4c26-b889-51f409addcea", 00:11:58.247 "method": "bdev_lvol_get_lvstores", 00:11:58.247 "req_id": 1 00:11:58.247 } 00:11:58.247 Got JSON-RPC error response 00:11:58.247 response: 00:11:58.247 { 00:11:58.247 "code": -19, 00:11:58.247 "message": "No such device" 00:11:58.247 } 00:11:58.247 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:58.247 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.247 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:58.247 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.247 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.505 aio_bdev 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c02478bf-15b6-4a86-a282-deb182e47055 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=c02478bf-15b6-4a86-a282-deb182e47055 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:58.505 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:58.764 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c02478bf-15b6-4a86-a282-deb182e47055 -t 2000 00:11:59.022 [ 00:11:59.022 { 00:11:59.022 "name": "c02478bf-15b6-4a86-a282-deb182e47055", 00:11:59.022 "aliases": [ 00:11:59.022 "lvs/lvol" 00:11:59.022 ], 00:11:59.022 "product_name": "Logical Volume", 00:11:59.022 "block_size": 4096, 00:11:59.022 "num_blocks": 38912, 00:11:59.022 "uuid": "c02478bf-15b6-4a86-a282-deb182e47055", 00:11:59.022 "assigned_rate_limits": { 00:11:59.022 "rw_ios_per_sec": 0, 00:11:59.022 "rw_mbytes_per_sec": 0, 00:11:59.022 "r_mbytes_per_sec": 0, 00:11:59.022 "w_mbytes_per_sec": 0 00:11:59.022 }, 00:11:59.022 "claimed": false, 00:11:59.022 "zoned": false, 00:11:59.022 "supported_io_types": { 00:11:59.022 "read": true, 00:11:59.022 "write": true, 00:11:59.022 "unmap": true, 00:11:59.022 "write_zeroes": true, 00:11:59.022 "flush": false, 00:11:59.022 "reset": true, 00:11:59.022 "compare": false, 00:11:59.022 "compare_and_write": false, 00:11:59.022 "abort": false, 00:11:59.022 "nvme_admin": false, 00:11:59.022 "nvme_io": false 00:11:59.022 }, 00:11:59.022 "driver_specific": { 00:11:59.022 "lvol": { 00:11:59.022 "lvol_store_uuid": "5ef185fd-c4b2-4c26-b889-51f409addcea", 00:11:59.022 "base_bdev": "aio_bdev", 
00:11:59.022 "thin_provision": false, 00:11:59.022 "num_allocated_clusters": 38, 00:11:59.022 "snapshot": false, 00:11:59.022 "clone": false, 00:11:59.022 "esnap_clone": false 00:11:59.022 } 00:11:59.022 } 00:11:59.022 } 00:11:59.022 ] 00:11:59.022 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:11:59.022 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:59.022 04:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:59.281 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:59.281 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:11:59.281 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:59.539 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:59.539 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c02478bf-15b6-4a86-a282-deb182e47055 00:11:59.797 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ef185fd-c4b2-4c26-b889-51f409addcea 00:12:00.055 04:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:00.314 00:12:00.314 real 0m17.374s 00:12:00.314 user 0m16.900s 00:12:00.314 sys 0m1.893s 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:00.314 ************************************ 00:12:00.314 END TEST lvs_grow_clean 00:12:00.314 ************************************ 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:00.314 ************************************ 00:12:00.314 START TEST lvs_grow_dirty 00:12:00.314 ************************************ 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:00.314 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:00.882 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:00.882 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:00.882 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8be6b7cf-795e-4243-870a-bba7117455fa 00:12:00.882 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:00.882 04:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:01.140 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:01.140 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:01.140 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8be6b7cf-795e-4243-870a-bba7117455fa lvol 150 00:12:01.426 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:01.426 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:01.426 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:01.684 [2024-05-15 04:12:49.597117] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:01.684 [2024-05-15 04:12:49.597227] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:01.684 true 00:12:01.684 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:01.684 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:01.943 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:01.943 04:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:02.201 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:02.459 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:02.717 [2024-05-15 04:12:50.580112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.717 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3340088 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3340088 /var/tmp/bdevperf.sock 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3340088 ']' 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:02.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.975 04:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:02.975 [2024-05-15 04:12:50.880896] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:02.975 [2024-05-15 04:12:50.880980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340088 ] 00:12:02.975 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.975 [2024-05-15 04:12:50.953354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.234 [2024-05-15 04:12:51.075357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.168 04:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.168 04:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:04.168 04:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:04.426 Nvme0n1 00:12:04.426 04:12:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:04.685 [ 00:12:04.685 { 00:12:04.685 "name": "Nvme0n1", 00:12:04.685 "aliases": [ 00:12:04.685 "2fa1403c-a12f-4bd8-932b-e884e58f5abe" 00:12:04.685 ], 00:12:04.685 "product_name": "NVMe disk", 00:12:04.685 "block_size": 4096, 00:12:04.685 "num_blocks": 38912, 00:12:04.685 "uuid": "2fa1403c-a12f-4bd8-932b-e884e58f5abe", 00:12:04.685 "assigned_rate_limits": { 00:12:04.685 "rw_ios_per_sec": 0, 00:12:04.685 "rw_mbytes_per_sec": 0, 00:12:04.685 "r_mbytes_per_sec": 0, 00:12:04.685 "w_mbytes_per_sec": 0 00:12:04.685 }, 00:12:04.685 "claimed": false, 00:12:04.685 "zoned": false, 00:12:04.685 "supported_io_types": { 00:12:04.685 "read": true, 00:12:04.685 "write": true, 00:12:04.685 "unmap": true, 00:12:04.685 "write_zeroes": true, 00:12:04.685 "flush": true, 00:12:04.685 "reset": true, 00:12:04.685 "compare": true, 00:12:04.685 "compare_and_write": true, 00:12:04.685 "abort": true, 00:12:04.685 "nvme_admin": true, 00:12:04.685 "nvme_io": true 00:12:04.685 }, 00:12:04.685 "memory_domains": [ 00:12:04.685 { 00:12:04.685 "dma_device_id": "system", 00:12:04.685 "dma_device_type": 1 00:12:04.685 } 00:12:04.685 ], 00:12:04.685 "driver_specific": { 00:12:04.685 "nvme": [ 00:12:04.685 { 00:12:04.685 "trid": { 00:12:04.685 "trtype": "TCP", 00:12:04.685 "adrfam": "IPv4", 00:12:04.685 "traddr": "10.0.0.2", 00:12:04.685 "trsvcid": "4420", 00:12:04.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:04.685 }, 00:12:04.685 "ctrlr_data": { 00:12:04.685 "cntlid": 1, 00:12:04.685 "vendor_id": "0x8086", 00:12:04.685 "model_number": "SPDK bdev Controller", 00:12:04.685 "serial_number": "SPDK0", 00:12:04.685 "firmware_revision": "24.05", 00:12:04.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:04.685 "oacs": { 00:12:04.685 "security": 0, 00:12:04.685 "format": 0, 00:12:04.685 "firmware": 0, 00:12:04.685 "ns_manage": 0 00:12:04.685 }, 00:12:04.685 "multi_ctrlr": true, 00:12:04.685 "ana_reporting": false 00:12:04.685 }, 00:12:04.685 "vs": { 00:12:04.685 "nvme_version": "1.3" 00:12:04.685 }, 00:12:04.685 "ns_data": { 00:12:04.685 "id": 1, 00:12:04.685 "can_share": true 00:12:04.685 } 00:12:04.685 } 00:12:04.685 ], 00:12:04.685 "mp_policy": "active_passive" 00:12:04.685 } 00:12:04.685 } 00:12:04.685 ] 00:12:04.685 04:12:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3340354 00:12:04.685 04:12:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:04.685 04:12:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:04.685 Running I/O for 10 seconds... 00:12:05.634 Latency(us) 00:12:05.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.634 Nvme0n1 : 1.00 12893.00 50.36 0.00 0.00 0.00 0.00 0.00 00:12:05.634 =================================================================================================================== 00:12:05.634 Total : 12893.00 50.36 0.00 0.00 0.00 0.00 0.00 00:12:05.634 00:12:06.567 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:06.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.826 Nvme0n1 : 2.00 13122.50 51.26 0.00 0.00 0.00 0.00 0.00 00:12:06.826 =================================================================================================================== 00:12:06.826 Total : 13122.50 51.26 0.00 0.00 0.00 0.00 0.00 00:12:06.826 00:12:06.826 true 00:12:06.826 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:06.826 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:07.085 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:07.085 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:07.085 04:12:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3340354 00:12:07.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.650 Nvme0n1 : 3.00 13180.33 51.49 0.00 0.00 0.00 0.00 0.00 00:12:07.650 =================================================================================================================== 00:12:07.650 Total : 13180.33 51.49 0.00 0.00 0.00 0.00 0.00 00:12:07.650 00:12:09.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.024 Nvme0n1 : 4.00 13229.25 51.68 0.00 0.00 0.00 0.00 0.00 00:12:09.024 =================================================================================================================== 00:12:09.024 Total : 13229.25 51.68 0.00 0.00 0.00 0.00 0.00 00:12:09.024 00:12:09.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.590 Nvme0n1 : 5.00 13300.20 51.95 0.00 0.00 0.00 0.00 0.00 00:12:09.590 =================================================================================================================== 00:12:09.590 Total : 13300.20 51.95 0.00 0.00 0.00 0.00 0.00 00:12:09.590 00:12:10.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.964 Nvme0n1 : 6.00 13359.50 52.19 0.00 0.00 0.00 0.00 0.00 00:12:10.964 
=================================================================================================================== 00:12:10.964 Total : 13359.50 52.19 0.00 0.00 0.00 0.00 0.00 00:12:10.964 00:12:11.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.899 Nvme0n1 : 7.00 13398.43 52.34 0.00 0.00 0.00 0.00 0.00 00:12:11.899 =================================================================================================================== 00:12:11.899 Total : 13398.43 52.34 0.00 0.00 0.00 0.00 0.00 00:12:11.899 00:12:12.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.833 Nvme0n1 : 8.00 13439.62 52.50 0.00 0.00 0.00 0.00 0.00 00:12:12.833 =================================================================================================================== 00:12:12.833 Total : 13439.62 52.50 0.00 0.00 0.00 0.00 0.00 00:12:12.833 00:12:13.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.768 Nvme0n1 : 9.00 13460.11 52.58 0.00 0.00 0.00 0.00 0.00 00:12:13.768 =================================================================================================================== 00:12:13.768 Total : 13460.11 52.58 0.00 0.00 0.00 0.00 0.00 00:12:13.768 00:12:14.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.704 Nvme0n1 : 10.00 13484.50 52.67 0.00 0.00 0.00 0.00 0.00 00:12:14.704 =================================================================================================================== 00:12:14.704 Total : 13484.50 52.67 0.00 0.00 0.00 0.00 0.00 00:12:14.704 00:12:14.704 00:12:14.704 Latency(us) 00:12:14.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.704 Nvme0n1 : 10.01 13484.11 52.67 0.00 0.00 9483.78 7524.50 19320.98 00:12:14.704 =================================================================================================================== 00:12:14.704 Total : 13484.11 52.67 0.00 0.00 9483.78 7524.50 19320.98 00:12:14.704 0 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3340088 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3340088 ']' 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3340088 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3340088 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3340088' 00:12:14.704 killing process with pid 3340088 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3340088 00:12:14.704 Received shutdown signal, test time was about 10.000000 seconds 00:12:14.704 00:12:14.704 Latency(us) 00:12:14.704 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:14.704 =================================================================================================================== 00:12:14.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.704 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3340088 00:12:14.962 04:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.220 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:15.486 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:15.486 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3337470 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3337470 00:12:15.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3337470 Killed "${NVMF_APP[@]}" "$@" 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3341684 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3341684 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3341684 ']' 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.747 04:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 [2024-05-15 04:13:03.799008] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:16.044 [2024-05-15 04:13:03.799083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.044 [2024-05-15 04:13:03.876730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.044 [2024-05-15 04:13:03.991151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.044 [2024-05-15 04:13:03.991205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.044 [2024-05-15 04:13:03.991221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.044 [2024-05-15 04:13:03.991243] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.044 [2024-05-15 04:13:03.991254] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.044 [2024-05-15 04:13:03.991283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.979 04:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:17.237 [2024-05-15 04:13:05.026496] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:17.237 [2024-05-15 04:13:05.026631] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:17.237 [2024-05-15 04:13:05.026692] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:17.237 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:17.495 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2fa1403c-a12f-4bd8-932b-e884e58f5abe -t 2000 00:12:17.753 [ 00:12:17.753 { 00:12:17.753 "name": "2fa1403c-a12f-4bd8-932b-e884e58f5abe", 00:12:17.753 "aliases": [ 00:12:17.753 "lvs/lvol" 00:12:17.753 ], 00:12:17.753 "product_name": "Logical Volume", 00:12:17.753 "block_size": 4096, 00:12:17.753 "num_blocks": 38912, 00:12:17.753 "uuid": "2fa1403c-a12f-4bd8-932b-e884e58f5abe", 00:12:17.753 "assigned_rate_limits": { 00:12:17.753 "rw_ios_per_sec": 0, 00:12:17.753 "rw_mbytes_per_sec": 0, 00:12:17.753 "r_mbytes_per_sec": 0, 00:12:17.753 "w_mbytes_per_sec": 0 00:12:17.753 }, 00:12:17.753 "claimed": false, 00:12:17.753 "zoned": false, 00:12:17.753 "supported_io_types": { 00:12:17.753 "read": true, 00:12:17.753 "write": true, 00:12:17.753 "unmap": true, 00:12:17.753 "write_zeroes": true, 00:12:17.753 "flush": false, 00:12:17.753 "reset": true, 00:12:17.753 "compare": false, 00:12:17.753 "compare_and_write": false, 00:12:17.753 "abort": false, 00:12:17.753 "nvme_admin": false, 00:12:17.753 "nvme_io": false 00:12:17.753 }, 00:12:17.753 "driver_specific": { 00:12:17.753 "lvol": { 00:12:17.753 "lvol_store_uuid": "8be6b7cf-795e-4243-870a-bba7117455fa", 00:12:17.753 "base_bdev": "aio_bdev", 00:12:17.753 "thin_provision": false, 00:12:17.753 "num_allocated_clusters": 38, 00:12:17.753 "snapshot": false, 00:12:17.753 "clone": false, 00:12:17.753 "esnap_clone": false 00:12:17.753 } 00:12:17.753 } 00:12:17.753 } 00:12:17.753 ] 00:12:17.753 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:17.753 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:17.753 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:18.011 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:18.011 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:18.011 04:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:18.270 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:18.270 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:18.528 [2024-05-15 04:13:06.339572] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8be6b7cf-795e-4243-870a-bba7117455fa 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:18.528 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:18.786 request: 00:12:18.786 { 00:12:18.786 "uuid": "8be6b7cf-795e-4243-870a-bba7117455fa", 00:12:18.786 "method": "bdev_lvol_get_lvstores", 00:12:18.786 "req_id": 1 00:12:18.786 } 00:12:18.786 Got JSON-RPC error response 00:12:18.786 response: 00:12:18.786 { 00:12:18.786 "code": -19, 00:12:18.786 "message": "No such device" 00:12:18.786 } 00:12:18.786 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:18.786 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.786 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.786 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.786 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:19.045 aio_bdev 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:19.045 04:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:19.303 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2fa1403c-a12f-4bd8-932b-e884e58f5abe -t 2000 00:12:19.561 [ 00:12:19.561 { 00:12:19.561 "name": "2fa1403c-a12f-4bd8-932b-e884e58f5abe", 00:12:19.561 "aliases": [ 00:12:19.561 "lvs/lvol" 00:12:19.561 ], 00:12:19.561 "product_name": "Logical Volume", 00:12:19.561 "block_size": 4096, 00:12:19.561 "num_blocks": 38912, 00:12:19.561 "uuid": "2fa1403c-a12f-4bd8-932b-e884e58f5abe", 00:12:19.561 "assigned_rate_limits": { 00:12:19.561 "rw_ios_per_sec": 0, 00:12:19.561 "rw_mbytes_per_sec": 0, 00:12:19.561 "r_mbytes_per_sec": 0, 00:12:19.561 "w_mbytes_per_sec": 0 00:12:19.561 }, 00:12:19.561 "claimed": false, 00:12:19.561 "zoned": false, 00:12:19.561 "supported_io_types": { 00:12:19.561 "read": true, 00:12:19.561 "write": true, 00:12:19.561 "unmap": true, 00:12:19.561 "write_zeroes": true, 00:12:19.561 "flush": false, 00:12:19.561 "reset": true, 00:12:19.561 "compare": false, 00:12:19.561 "compare_and_write": false, 00:12:19.561 "abort": false, 00:12:19.561 "nvme_admin": false, 00:12:19.561 "nvme_io": false 00:12:19.561 }, 00:12:19.561 "driver_specific": { 00:12:19.561 "lvol": { 00:12:19.561 "lvol_store_uuid": "8be6b7cf-795e-4243-870a-bba7117455fa", 00:12:19.561 "base_bdev": "aio_bdev", 00:12:19.561 "thin_provision": false, 00:12:19.561 "num_allocated_clusters": 38, 00:12:19.561 "snapshot": false, 00:12:19.561 "clone": false, 00:12:19.561 "esnap_clone": false 00:12:19.561 } 00:12:19.561 } 00:12:19.561 } 00:12:19.561 ] 00:12:19.561 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:19.561 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:19.561 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:19.818 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:19.818 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:19.818 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:20.076 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:20.076 04:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2fa1403c-a12f-4bd8-932b-e884e58f5abe 00:12:20.335 04:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8be6b7cf-795e-4243-870a-bba7117455fa 00:12:20.593 04:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:20.852 00:12:20.852 real 0m20.493s 00:12:20.852 user 0m50.216s 00:12:20.852 sys 0m5.162s 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:20.852 ************************************ 00:12:20.852 END TEST lvs_grow_dirty 00:12:20.852 ************************************ 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:20.852 nvmf_trace.0 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.852 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.110 rmmod nvme_tcp 00:12:21.110 rmmod nvme_fabrics 00:12:21.110 rmmod nvme_keyring 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3341684 ']' 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3341684 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3341684 ']' 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3341684 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3341684 00:12:21.110 04:13:08 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3341684' 00:12:21.110 killing process with pid 3341684 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3341684 00:12:21.110 04:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3341684 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.368 04:13:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.369 04:13:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.369 04:13:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.270 04:13:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.270 00:12:23.270 real 0m44.416s 00:12:23.270 user 1m14.107s 00:12:23.270 sys 0m9.244s 00:12:23.270 04:13:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.270 04:13:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:23.270 ************************************ 00:12:23.270 END TEST nvmf_lvs_grow 00:12:23.270 ************************************ 00:12:23.270 04:13:11 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:23.270 04:13:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:23.270 04:13:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.271 04:13:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.528 ************************************ 00:12:23.528 START TEST nvmf_bdev_io_wait 00:12:23.529 ************************************ 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:23.529 * Looking for test storage... 
00:12:23.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.529 04:13:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:26.057 00:12:26.057 --- 10.0.0.2 ping statistics --- 00:12:26.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.057 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:26.057 00:12:26.057 --- 10.0.0.1 ping statistics --- 00:12:26.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.057 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3344624 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3344624 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3344624 ']' 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.057 04:13:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:26.057 [2024-05-15 04:13:14.004086] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
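For readability, a recap of the nvmf_tcp_init sequence traced above; every command is taken from the trace. One E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why the nvmf_tgt command line above is prefixed with 'ip netns exec cvl_0_0_ns_spdk': NVMF_TARGET_NS_CMD makes the target listen from inside that namespace on 10.0.0.2.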
00:12:26.057 [2024-05-15 04:13:14.004160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.057 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.315 [2024-05-15 04:13:14.082076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.315 [2024-05-15 04:13:14.194653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.315 [2024-05-15 04:13:14.194714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.315 [2024-05-15 04:13:14.194742] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.315 [2024-05-15 04:13:14.194754] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.315 [2024-05-15 04:13:14.194764] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.315 [2024-05-15 04:13:14.194831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.315 [2024-05-15 04:13:14.194863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.315 [2024-05-15 04:13:14.194921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.315 [2024-05-15 04:13:14.194923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.249 04:13:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.249 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.249 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.249 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.250 [2024-05-15 04:13:15.062558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.250 04:13:15 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.250 Malloc0 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.250 [2024-05-15 04:13:15.128001] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.250 [2024-05-15 04:13:15.128295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3344780 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3344782 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:27.250 { 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme$subsystem", 00:12:27.250 "trtype": "$TEST_TRANSPORT", 00:12:27.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "$NVMF_PORT", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.250 "hdgst": ${hdgst:-false}, 00:12:27.250 "ddgst": ${ddgst:-false} 00:12:27.250 }, 00:12:27.250 "method": 
"bdev_nvme_attach_controller" 00:12:27.250 } 00:12:27.250 EOF 00:12:27.250 )") 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3344784 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:27.250 { 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme$subsystem", 00:12:27.250 "trtype": "$TEST_TRANSPORT", 00:12:27.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "$NVMF_PORT", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.250 "hdgst": ${hdgst:-false}, 00:12:27.250 "ddgst": ${ddgst:-false} 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 } 00:12:27.250 EOF 00:12:27.250 )") 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3344787 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:27.250 { 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme$subsystem", 00:12:27.250 "trtype": "$TEST_TRANSPORT", 00:12:27.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "$NVMF_PORT", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.250 "hdgst": ${hdgst:-false}, 00:12:27.250 "ddgst": ${ddgst:-false} 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 } 00:12:27.250 EOF 00:12:27.250 )") 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:27.250 { 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme$subsystem", 00:12:27.250 "trtype": "$TEST_TRANSPORT", 00:12:27.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "$NVMF_PORT", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.250 "hdgst": ${hdgst:-false}, 00:12:27.250 "ddgst": ${ddgst:-false} 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 } 00:12:27.250 EOF 00:12:27.250 )") 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3344780 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme1", 00:12:27.250 "trtype": "tcp", 00:12:27.250 "traddr": "10.0.0.2", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "4420", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.250 "hdgst": false, 00:12:27.250 "ddgst": false 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 }' 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme1", 00:12:27.250 "trtype": "tcp", 00:12:27.250 "traddr": "10.0.0.2", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "4420", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.250 "hdgst": false, 00:12:27.250 "ddgst": false 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 }' 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
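The four bdevperf instances above are the initiator side of the bdev_io_wait test: same target, same namespace, different workloads and core masks. The /dev/fd/63 on each command line is the expansion of a bash process substitution feeding gen_nvmf_target_json's output to --json (the process-substitution form is inferred from the trace, not shown literally). A sketch of one launch; the other three differ only in -m, -i and -w:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)     # traced as --json /dev/fd/63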
00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme1", 00:12:27.250 "trtype": "tcp", 00:12:27.250 "traddr": "10.0.0.2", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "4420", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.250 "hdgst": false, 00:12:27.250 "ddgst": false 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 }' 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:27.250 04:13:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:27.250 "params": { 00:12:27.250 "name": "Nvme1", 00:12:27.250 "trtype": "tcp", 00:12:27.250 "traddr": "10.0.0.2", 00:12:27.250 "adrfam": "ipv4", 00:12:27.250 "trsvcid": "4420", 00:12:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.250 "hdgst": false, 00:12:27.250 "ddgst": false 00:12:27.250 }, 00:12:27.250 "method": "bdev_nvme_attach_controller" 00:12:27.250 }' 00:12:27.250 [2024-05-15 04:13:15.173780] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:27.250 [2024-05-15 04:13:15.173780] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:27.251 [2024-05-15 04:13:15.173878] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 04:13:15.173879] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:27.251 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:27.251 [2024-05-15 04:13:15.174259] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:27.251 [2024-05-15 04:13:15.174260] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
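Each printf block above is one complete attach description for a bdevperf instance. gen_nvmf_target_json wraps such entries in a JSON config document that bdevperf loads via --json; the exact wrapper is not shown in this trace, so the shape below is an assumption for illustration only, with the params object being the one printed above:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }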
00:12:27.251 [2024-05-15 04:13:15.174330] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 04:13:15.174330] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:27.251 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:27.251 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.509 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.509 [2024-05-15 04:13:15.366149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.509 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.509 [2024-05-15 04:13:15.464373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:27.509 [2024-05-15 04:13:15.467027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.509 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.767 [2024-05-15 04:13:15.543176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.767 [2024-05-15 04:13:15.569313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:27.767 [2024-05-15 04:13:15.618136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.767 [2024-05-15 04:13:15.639627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:27.767 [2024-05-15 04:13:15.713688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:28.025 Running I/O for 1 seconds... 00:12:28.025 Running I/O for 1 seconds... 00:12:28.025 Running I/O for 1 seconds... 00:12:28.025 Running I/O for 1 seconds... 
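All four jobs attach to the same Malloc0-backed namespace through nqn.2016-06.io.spdk:cnode1 and run for one second each; their per-workload results follow. The driver script then simply blocks on each PID in turn (the first wait is already visible above, the remaining three appear after the result tables):

    wait "$WRITE_PID"    # 3344780, -w write,  core mask 0x10
    wait "$READ_PID"     # 3344782, -w read,   core mask 0x20
    wait "$FLUSH_PID"    # 3344784, -w flush,  core mask 0x40
    wait "$UNMAP_PID"    # 3344787, -w unmap,  core mask 0x80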
00:12:28.959 00:12:28.959 Latency(us) 00:12:28.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.960 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:28.960 Nvme1n1 : 1.02 6213.68 24.27 0.00 0.00 20420.22 4199.16 32816.55 00:12:28.960 =================================================================================================================== 00:12:28.960 Total : 6213.68 24.27 0.00 0.00 20420.22 4199.16 32816.55 00:12:28.960 00:12:28.960 Latency(us) 00:12:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.960 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:28.960 Nvme1n1 : 1.01 9989.84 39.02 0.00 0.00 12751.66 9369.22 24660.95 00:12:28.960 =================================================================================================================== 00:12:28.960 Total : 9989.84 39.02 0.00 0.00 12751.66 9369.22 24660.95 00:12:28.960 00:12:28.960 Latency(us) 00:12:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.960 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:28.960 Nvme1n1 : 1.01 6253.73 24.43 0.00 0.00 20382.49 7524.50 36505.98 00:12:28.960 =================================================================================================================== 00:12:28.960 Total : 6253.73 24.43 0.00 0.00 20382.49 7524.50 36505.98 00:12:28.960 00:12:28.960 Latency(us) 00:12:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.960 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:28.960 Nvme1n1 : 1.00 141522.15 552.82 0.00 0.00 900.93 268.52 1383.54 00:12:28.960 =================================================================================================================== 00:12:28.960 Total : 141522.15 552.82 0.00 0.00 900.93 268.52 1383.54 00:12:29.218 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3344782 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3344784 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3344787 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.477 rmmod nvme_tcp 00:12:29.477 rmmod nvme_fabrics 00:12:29.477 rmmod nvme_keyring 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3344624 ']' 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3344624 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3344624 ']' 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3344624 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3344624 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3344624' 00:12:29.477 killing process with pid 3344624 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3344624 00:12:29.477 [2024-05-15 04:13:17.370678] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:29.477 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3344624 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.735 04:13:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.669 04:13:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.669 00:12:31.669 real 0m8.360s 00:12:31.669 user 0m19.248s 00:12:31.669 sys 0m3.967s 00:12:31.669 04:13:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.669 04:13:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.669 ************************************ 00:12:31.669 END TEST nvmf_bdev_io_wait 00:12:31.669 ************************************ 00:12:31.927 04:13:19 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:31.927 04:13:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:31.927 04:13:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.928 04:13:19 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:12:31.928 ************************************ 00:12:31.928 START TEST nvmf_queue_depth 00:12:31.928 ************************************ 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:31.928 * Looking for test storage... 00:12:31.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.928 04:13:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.461 
04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.461 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:12:34.462 00:12:34.462 --- 10.0.0.2 ping statistics --- 00:12:34.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.462 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:34.462 00:12:34.462 --- 10.0.0.1 ping statistics --- 00:12:34.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.462 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3347301 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3347301 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3347301 ']' 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.462 04:13:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:34.462 [2024-05-15 04:13:22.396540] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:12:34.462 [2024-05-15 04:13:22.396620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.462 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.720 [2024-05-15 04:13:22.476439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.720 [2024-05-15 04:13:22.591671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.720 [2024-05-15 04:13:22.591743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.721 [2024-05-15 04:13:22.591759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.721 [2024-05-15 04:13:22.591772] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.721 [2024-05-15 04:13:22.591784] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.721 [2024-05-15 04:13:22.591821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 [2024-05-15 04:13:23.359668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 Malloc0 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.654 04:13:23 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 [2024-05-15 04:13:23.419295] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:35.654 [2024-05-15 04:13:23.419578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3347454 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3347454 /var/tmp/bdevperf.sock 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3347454 ']' 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:35.654 04:13:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.654 [2024-05-15 04:13:23.463122] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
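Target-side setup for the queue-depth test mirrors the earlier bdev_io_wait sequence: create the TCP transport, back a namespace with a malloc bdev, and publish a listener inside the namespaced target. Consolidated from the rpc_cmd calls traced above (rpc_cmd is the test harness's thin wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock here):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB in-capsule data size
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side that follows is a single bdevperf (-q 1024 -o 4096 -w verify -t 10) driven over /var/tmp/bdevperf.sock, which attaches NVMe0 to the same subsystem before the 10-second run.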
00:12:35.655 [2024-05-15 04:13:23.463196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347454 ] 00:12:35.655 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.655 [2024-05-15 04:13:23.542798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.913 [2024-05-15 04:13:23.680088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.480 04:13:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:36.480 04:13:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:36.480 04:13:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:36.480 04:13:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.480 04:13:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.738 NVMe0n1 00:12:36.738 04:13:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.738 04:13:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:36.738 Running I/O for 10 seconds... 00:12:48.944 00:12:48.944 Latency(us) 00:12:48.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.944 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:48.944 Verification LBA range: start 0x0 length 0x4000 00:12:48.944 NVMe0n1 : 10.09 8222.33 32.12 0.00 0.00 124063.41 24660.95 89711.50 00:12:48.944 =================================================================================================================== 00:12:48.944 Total : 8222.33 32.12 0.00 0.00 124063.41 24660.95 89711.50 00:12:48.944 0 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3347454 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3347454 ']' 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3347454 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3347454 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3347454' 00:12:48.944 killing process with pid 3347454 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3347454 00:12:48.944 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.944 00:12:48.944 Latency(us) 00:12:48.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.944 =================================================================================================================== 00:12:48.944 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.944 04:13:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3347454 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.944 rmmod nvme_tcp 00:12:48.944 rmmod nvme_fabrics 00:12:48.944 rmmod nvme_keyring 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3347301 ']' 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3347301 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3347301 ']' 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3347301 00:12:48.944 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3347301 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3347301' 00:12:48.945 killing process with pid 3347301 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3347301 00:12:48.945 [2024-05-15 04:13:35.159282] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3347301 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.945 04:13:35 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.511 04:13:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.511 00:12:49.511 real 0m17.792s 00:12:49.511 user 0m25.025s 00:12:49.511 sys 0m3.486s 00:12:49.511 04:13:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.511 04:13:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:49.511 ************************************ 00:12:49.511 END TEST nvmf_queue_depth 00:12:49.511 ************************************ 00:12:49.770 04:13:37 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:49.770 04:13:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:49.770 04:13:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.770 04:13:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.770 ************************************ 00:12:49.770 START TEST nvmf_target_multipath 00:12:49.771 ************************************ 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:49.771 * Looking for test storage... 00:12:49.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.771 04:13:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:52.305 04:13:39 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:52.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:52.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.305 04:13:39 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:52.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:52.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.305 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.306 04:13:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:12:52.306 00:12:52.306 --- 10.0.0.2 ping statistics --- 00:12:52.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.306 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:12:52.306 00:12:52.306 --- 10.0.0.1 ping statistics --- 00:12:52.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.306 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:52.306 only one NIC for nvmf test 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.306 rmmod nvme_tcp 00:12:52.306 rmmod nvme_fabrics 00:12:52.306 rmmod nvme_keyring 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.306 04:13:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.213 00:12:54.213 real 0m4.638s 00:12:54.213 user 0m0.899s 00:12:54.213 sys 0m1.738s 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:54.213 04:13:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:54.213 ************************************ 00:12:54.213 END TEST nvmf_target_multipath 00:12:54.213 ************************************ 00:12:54.471 04:13:42 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:54.471 04:13:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:54.471 04:13:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:54.471 04:13:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.471 ************************************ 00:12:54.471 START TEST nvmf_zcopy 00:12:54.471 ************************************ 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:54.471 * Looking for test storage... 
00:12:54.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.471 04:13:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.019 
04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:57.019 00:12:57.019 --- 10.0.0.2 ping statistics --- 00:12:57.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.019 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:12:57.019 00:12:57.019 --- 10.0.0.1 ping statistics --- 00:12:57.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.019 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.019 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3353337 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3353337 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3353337 ']' 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.020 04:13:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.020 [2024-05-15 04:13:44.942553] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:57.020 [2024-05-15 04:13:44.942635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.020 [2024-05-15 04:13:45.023803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.278 [2024-05-15 04:13:45.141115] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.278 [2024-05-15 04:13:45.141168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:57.278 [2024-05-15 04:13:45.141197] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.278 [2024-05-15 04:13:45.141219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.278 [2024-05-15 04:13:45.141228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.278 [2024-05-15 04:13:45.141271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 [2024-05-15 04:13:45.965987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 [2024-05-15 04:13:45.981924] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:58.212 [2024-05-15 04:13:45.982227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:58.212 04:13:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 malloc0 00:12:58.212 04:13:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.212 04:13:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:58.212 04:13:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.212 04:13:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 04:13:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:58.213 { 00:12:58.213 "params": { 00:12:58.213 "name": "Nvme$subsystem", 00:12:58.213 "trtype": "$TEST_TRANSPORT", 00:12:58.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.213 "adrfam": "ipv4", 00:12:58.213 "trsvcid": "$NVMF_PORT", 00:12:58.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.213 "hdgst": ${hdgst:-false}, 00:12:58.213 "ddgst": ${ddgst:-false} 00:12:58.213 }, 00:12:58.213 "method": "bdev_nvme_attach_controller" 00:12:58.213 } 00:12:58.213 EOF 00:12:58.213 )") 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:58.213 04:13:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:58.213 "params": { 00:12:58.213 "name": "Nvme1", 00:12:58.213 "trtype": "tcp", 00:12:58.213 "traddr": "10.0.0.2", 00:12:58.213 "adrfam": "ipv4", 00:12:58.213 "trsvcid": "4420", 00:12:58.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.213 "hdgst": false, 00:12:58.213 "ddgst": false 00:12:58.213 }, 00:12:58.213 "method": "bdev_nvme_attach_controller" 00:12:58.213 }' 00:12:58.213 [2024-05-15 04:13:46.064286] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:12:58.213 [2024-05-15 04:13:46.064367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353493 ] 00:12:58.213 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.213 [2024-05-15 04:13:46.144891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.470 [2024-05-15 04:13:46.264110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.728 Running I/O for 10 seconds... 
00:13:08.696
00:13:08.696                                                                            Latency(us)
00:13:08.696 Device Information                                            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:08.696 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:08.696     Verification LBA range: start 0x0 length 0x1000
00:13:08.696     Nvme1n1                                                   :      10.01    6073.94      47.45       0.00       0.00   21015.75    1674.81   35340.89
00:13:08.696 ===================================================================================================================
00:13:08.696 Total                                                         :               6073.94      47.45       0.00       0.00   21015.75    1674.81   35340.89
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3354690
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:08.954 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:08.954 {
00:13:08.954   "params": {
00:13:08.954     "name": "Nvme$subsystem",
00:13:08.954     "trtype": "$TEST_TRANSPORT",
00:13:08.954     "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:08.954     "adrfam": "ipv4",
00:13:08.954     "trsvcid": "$NVMF_PORT",
00:13:08.955     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:08.955     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:08.955     "hdgst": ${hdgst:-false},
00:13:08.955     "ddgst": ${ddgst:-false}
00:13:08.955   },
00:13:08.955   "method": "bdev_nvme_attach_controller"
00:13:08.955 }
00:13:08.955 EOF
00:13:08.955 )")
00:13:08.955 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:08.955 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
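Note on the output that follows: from here to the end of the excerpt the log is dominated by repeating pairs of subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" and nvmf_rpc.c:1536:nvmf_rpc_ns_paused "Unable to add namespace", emitted while the 5-second randrw bdevperf job configured above is in flight. The pattern is consistent with the test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1, which cannot succeed because malloc0 already holds that NSID but still pauses and resumes the subsystem on every attempt, presumably to exercise the zero-copy path under I/O; the errors are therefore expected for this run rather than a failure. A sketch of the two outcomes (malloc1 is hypothetical and not part of this run):

    # re-adding the NSID that malloc0 already occupies is rejected, producing the error pair seen below
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # attaching a different bdev without -n lets the target pick the next free NSID instead
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1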
00:13:08.955 [2024-05-15 04:13:56.924899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.924970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.955 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:08.955 04:13:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:08.955 "params": { 00:13:08.955 "name": "Nvme1", 00:13:08.955 "trtype": "tcp", 00:13:08.955 "traddr": "10.0.0.2", 00:13:08.955 "adrfam": "ipv4", 00:13:08.955 "trsvcid": "4420", 00:13:08.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.955 "hdgst": false, 00:13:08.955 "ddgst": false 00:13:08.955 }, 00:13:08.955 "method": "bdev_nvme_attach_controller" 00:13:08.955 }' 00:13:08.955 [2024-05-15 04:13:56.932845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.932867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.955 [2024-05-15 04:13:56.940866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.940887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.955 [2024-05-15 04:13:56.948887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.948908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.955 [2024-05-15 04:13:56.956951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.956974] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.955 [2024-05-15 04:13:56.960976] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:13:08.955 [2024-05-15 04:13:56.961050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354690 ] 00:13:08.955 [2024-05-15 04:13:56.964956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.955 [2024-05-15 04:13:56.964980] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:56.972979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:56.973003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:56.981018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:56.981041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:56.989009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:56.989031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.214 [2024-05-15 04:13:56.997020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:56.997042] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.005068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.005091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.013075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.013097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.021095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.021118] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.029116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.029138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.035401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.214 [2024-05-15 04:13:57.037136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.037158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.045206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.045270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.053197] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.053251] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.061221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.061250] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.069244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.069276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.077266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.077301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.085300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.085321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.093306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.093327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.101386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.101422] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.109365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.109393] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.117366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.117387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.125388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.125408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.133409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.133429] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.141446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.141466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.149451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.149476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.151922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.214 [2024-05-15 04:13:57.157470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.157491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.165504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.165529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.173565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.173600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.181582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.181620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.189608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.189648] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.197633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.197674] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.205653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.205691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.213679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.213721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.214 [2024-05-15 04:13:57.221672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.214 [2024-05-15 04:13:57.221698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.229725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.229765] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.237751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.237796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.245767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.245807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.253751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.253776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.261774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.261798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.269814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.269843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.277824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.277853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.285849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.285876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.293912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:09.473 [2024-05-15 04:13:57.293957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.301896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.301924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.309920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.309954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.317946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.317985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.325983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.326005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.334001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.334022] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.342017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.342038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.350036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.350059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.358053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.358074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.366066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.366087] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.374088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.374109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.382111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.382132] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.390138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.390161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.398160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.398182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.406182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.406219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 
04:13:57.414203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.414243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.422250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.422275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.430275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.430300] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.438303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.438329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.446314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.446345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.454342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.454371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 Running I/O for 5 seconds... 00:13:09.473 [2024-05-15 04:13:57.462358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.462384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.475905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.475941] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.473 [2024-05-15 04:13:57.486054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.473 [2024-05-15 04:13:57.486082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.496702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.496730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.509019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.509046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.518658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.518686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.529540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.529567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.540275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.540303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.550450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:09.732 [2024-05-15 04:13:57.550477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.561268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.561296] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.573037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.573065] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.582616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.582644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.593758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.593786] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.603509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.603537] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.614251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.614278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.627043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.627070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.636441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.636470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.647003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.647041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.657051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.657079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.666612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.666639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.676840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.676867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.686916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.686956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.697516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.697543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.707662] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.707690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.732 [2024-05-15 04:13:57.718311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.732 [2024-05-15 04:13:57.718338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.733 [2024-05-15 04:13:57.730795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.733 [2024-05-15 04:13:57.730822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.733 [2024-05-15 04:13:57.740241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.733 [2024-05-15 04:13:57.740268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.750798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.750825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.761038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.761064] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.770862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.770889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.781489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.781516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.791164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.791191] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.802220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.802247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.991 [2024-05-15 04:13:57.812186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.991 [2024-05-15 04:13:57.812221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.823235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.823263] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.832976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.833004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.843997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.844031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.853543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.853570] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.864644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.864672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.874834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.874861] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.884798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.884825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.895980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.896008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.905526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.905553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.916793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.916820] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.926555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.926582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.937200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.937228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.947517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.947545] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.957928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.957963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.970467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.970495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.979780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.979808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:57.992905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:57.992939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.992 [2024-05-15 04:13:58.002559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.992 [2024-05-15 04:13:58.002586] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.013426] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.013453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.023294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.023322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.034324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.034351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.046713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.046740] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.055964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.055991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.066534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.066562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.076661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.076688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.086908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.086944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.097154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.097182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.107577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.107605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.119471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.119499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.130504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.130532] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.140383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.140410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.250 [2024-05-15 04:13:58.151484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.250 [2024-05-15 04:13:58.151512] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.161901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.161939] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.171644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.171671] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.183140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.183167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.192986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.193013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.202853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.202881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.213123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.213150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.222604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.222631] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.233072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.233100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.243578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.243606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.251 [2024-05-15 04:13:58.254053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.251 [2024-05-15 04:13:58.254081] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.266411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.266439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.275927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.275962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.286713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.286741] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.296333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.296360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.307079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.307106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.319178] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.319205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.328694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.328721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.339760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.339787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.350431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.350458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.360816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.360843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.373517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.373544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.383018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.383045] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.393135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.393162] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.403374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.403400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.413627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.413654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.423593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.423619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.509 [2024-05-15 04:13:58.434223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.509 [2024-05-15 04:13:58.434249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.444148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.444176] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.454585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.454612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.468433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.468461] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.478591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.478618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.489260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.489287] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.500132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.500160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.509999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.510027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.510 [2024-05-15 04:13:58.520377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.510 [2024-05-15 04:13:58.520404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.533294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.533321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.543638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.543665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.554125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.554152] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.564501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.564528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.574120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.574148] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.584701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.584729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.594619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.594646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.605213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.605240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.617564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.617591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.629012] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.629039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.637912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.637953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.648500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.648528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.658502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.658530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.668831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.668858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.679458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.679485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.689918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.689953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.699511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.699538] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.710381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.710407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.720260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.720287] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.730613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.730640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.743049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.743077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.752497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.752525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.763428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.768 [2024-05-15 04:13:58.763455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.768 [2024-05-15 04:13:58.775550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.769 [2024-05-15 04:13:58.775577] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.784462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.784489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.797157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.797185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.806937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.806967] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.817790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.817818] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.829541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.829568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.840878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.840911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.850591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.850618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.861199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.861227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.871252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.871279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.882250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.882278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.892287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.892314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.903299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.903327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.914190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.914217] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.923726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.923754] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.935125] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.935153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.945291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.945318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.955173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.955200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.966101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.966128] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.976729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.976757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.986924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.986970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:58.999127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:58.999154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:59.008255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:59.008282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:59.019280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:59.019308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:59.029969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:59.030001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.027 [2024-05-15 04:13:59.039245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.027 [2024-05-15 04:13:59.039280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.050506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.050533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.060292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.060320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.070768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.070795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.080215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.080243] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.090592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.090620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.101440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.101468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.112064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.112093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.122697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.122724] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.134661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.134704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.143875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.143903] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.154845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.154873] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.164624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.164653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.175023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.175050] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.184709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.184736] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.196085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.196112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.207656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.207684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.216994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.217021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.228194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.228223] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.238473] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.238506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.248852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.248879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.259428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.259456] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.285 [2024-05-15 04:13:59.270011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.285 [2024-05-15 04:13:59.270039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.286 [2024-05-15 04:13:59.282470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.286 [2024-05-15 04:13:59.282498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.286 [2024-05-15 04:13:59.292032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.286 [2024-05-15 04:13:59.292060] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.302567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.302595] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.314854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.314882] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.323714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.323742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.334892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.334920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.344860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.344888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.355924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.355959] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.366132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.366159] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.376406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.376434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.388884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.388911] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.398307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.398333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.409082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.409109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.419245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.419272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.429490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.429517] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.439574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.439607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.450680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.450707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.460664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.460691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.471368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.471395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.482117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.482144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.492047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.492075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.544 [2024-05-15 04:13:59.502680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.544 [2024-05-15 04:13:59.502707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.545 [2024-05-15 04:13:59.512373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.545 [2024-05-15 04:13:59.512400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.545 [2024-05-15 04:13:59.523469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.545 [2024-05-15 04:13:59.523496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.545 [2024-05-15 04:13:59.534077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.545 [2024-05-15 04:13:59.534104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.545 [2024-05-15 04:13:59.544442] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.545 [2024-05-15 04:13:59.544469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.545 [2024-05-15 04:13:59.554656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.545 [2024-05-15 04:13:59.554684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.565329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.565356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.577345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.577372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.585879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.585906] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.598835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.598862] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.608533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.608560] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.619445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.619472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.629131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.629158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.639668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.639695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.649773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.649800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.659815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.659843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.669680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.669707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.680614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.680641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.690170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.690197] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.700132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.700160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.710959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.710986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.720842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.720869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.730477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.730503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.741492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.741519] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.750901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.750936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.761694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.761723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.771787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.771814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.781532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.781559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.792280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.792308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.802542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.802570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.803 [2024-05-15 04:13:59.813302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.803 [2024-05-15 04:13:59.813329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.823735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.823762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.834313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.834340] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.844839] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.844866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.855249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.855275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.866401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.866428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.876886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.876914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.887475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.887502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.896862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.896889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.908163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.908190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.918220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.918262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.928998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.929025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.940687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.940715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.949893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.949921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.962562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.962588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.971983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.972011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.983231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.983257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:13:59.995035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:13:59.995062] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.004493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.004521] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.014454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.014483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.025722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.025751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.035971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.035999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.046266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.046293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.057479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.057507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.062 [2024-05-15 04:14:00.067539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.062 [2024-05-15 04:14:00.067565] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.078630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.078657] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.087877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.087904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.098888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.098915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.108766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.108793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.119306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.119333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.320 [2024-05-15 04:14:00.131382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.320 [2024-05-15 04:14:00.131410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.140680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.140708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.151592] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.151620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.161629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.161657] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.172957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.172991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.183791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.183818] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.193556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.193583] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.203596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.203623] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.212891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.212923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.223469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.223504] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.233689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.233716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.243401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.243428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.254578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.254606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.264823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.264850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.275981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.276008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.285844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.285871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.297052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.297079] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.306858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.306885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.317924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.317962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.321 [2024-05-15 04:14:00.328861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.321 [2024-05-15 04:14:00.328888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.339035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.339074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.349728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.349756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.360033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.360062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.372963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.372991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.382378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.382407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.393139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.393167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.403688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.403715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.417244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.417271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.426549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.426583] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.437901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.437937] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.447644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.579 [2024-05-15 04:14:00.447672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.579 [2024-05-15 04:14:00.458707] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.458735] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.468844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.468871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.479703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.479731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.489719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.489746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.500765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.500792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.510458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.510485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.521353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.521380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.531635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.531661] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.542444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.542471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.552286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.552313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.563116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.563143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.573174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.573201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.583432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.583459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.580 [2024-05-15 04:14:00.594227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.580 [2024-05-15 04:14:00.594269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.603963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.603991] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.615280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.615307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.625194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.625232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.636113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.636141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.646024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.646066] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.656164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.656192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.666246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.666274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.676478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.676505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.686350] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.686377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.697296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.697324] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.707503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.707530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.717889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.717916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.728461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.728488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.738776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.738803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.838 [2024-05-15 04:14:00.751746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.838 [2024-05-15 04:14:00.751773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.761047] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.761075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.771835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.771863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.781590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.781618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.792453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.792480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.802062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.802088] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.813234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.813261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.823460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.823495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.833908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.833942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.839 [2024-05-15 04:14:00.844353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.839 [2024-05-15 04:14:00.844380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.854984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.855011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.865270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.865298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.875871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.875900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.886185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.886213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.896471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.896499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.906039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.906066] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.917351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.917377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.927749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.927776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.938220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.938247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.947514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.947541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.959090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.959117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.968964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.968992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.980701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.980728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:00.990704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:00.990732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.001776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.001803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.011921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.011956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.021787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.021821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.032657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.032684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.043543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.043570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.054472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.054499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.064894] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.064921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.075628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.075655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.086136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.086163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.096689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.096716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.098 [2024-05-15 04:14:01.106586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.098 [2024-05-15 04:14:01.106613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.117267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.117295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.127256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.127283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.138210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.138237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.147919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.147954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.159192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.159219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.169189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.169216] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.180078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.180105] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.189968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.189995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.200611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.200638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.212679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.212706] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.222432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.222459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.232451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.232478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.242965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.242992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.252832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.252860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.263466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.263494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.276570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.276598] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.287418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.287446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.296508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.296536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.307780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.307807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.319897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.319953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.328674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.328701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.339918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.339954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.349681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.349708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.360461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.360489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.356 [2024-05-15 04:14:01.370257] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.356 [2024-05-15 04:14:01.370284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.380827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.380855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.390963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.390990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.401962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.401989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.411555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.411582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.422341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.422368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.432938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.432965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.443461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.443488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.455888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.455915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.465249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.465276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.476256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.476283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.485901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.485947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.496891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.496926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.509051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.509079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.527898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.527949] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.537851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.537879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.548893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.548939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.558877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.558904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.569883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.569911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.580606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.580634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.590651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.590679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.601216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.601249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.611736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.611763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.615 [2024-05-15 04:14:01.621318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.615 [2024-05-15 04:14:01.621345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.632539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.632566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.642890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.642917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.653031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.653058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.663787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.663814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.674273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.674300] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.686679] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.686706] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.696598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.696626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.707443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.707470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.717324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.717352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.728298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.728325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.738120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.738146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.748916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.748956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.758878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.758905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.769877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.769904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.779775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.779802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.790589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.790616] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.802463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.802491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.813475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.813502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.823562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.823611] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.834073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.834100] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.844467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.844494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.854718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.854745] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.864869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.864896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.874546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.874573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.873 [2024-05-15 04:14:01.885631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.873 [2024-05-15 04:14:01.885659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.895536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.895564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.906049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.906082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.916621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.916649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.926626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.926653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.938424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.938452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.948299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.948326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.959521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.959548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.969483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.969510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.979629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.979656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:01.992513] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:01.992540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.001685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.001712] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.012606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.012633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.022568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.022602] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.033371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.033398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.043271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.043298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.054527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.054554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.064579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.064606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.075269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.075297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.085338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.085365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.096212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.096240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.105965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.105993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.116715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.116743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.126727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.126754] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.132 [2024-05-15 04:14:02.136819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.132 [2024-05-15 04:14:02.136861] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.147762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.147789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.157787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.157813] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.167649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.167676] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.178887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.178914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.188902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.188956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.199610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.199637] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.210462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.210489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.220188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.220222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.231308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.231336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.241142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.241169] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.252043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.252070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.262160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.262188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.273035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.273063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.282481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.282507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.293019] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.293046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.305116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.305143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.314666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.314693] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.325044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.325071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.334595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.334622] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.345769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.345796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.356183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.356210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.365971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.365998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.378849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.378876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.388456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.388484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.391 [2024-05-15 04:14:02.399706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.391 [2024-05-15 04:14:02.399733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.409655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.409682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.420855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.420889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.430730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.430758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.439959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.439987] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.451235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.451262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.461501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.461528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.471635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.471663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.481466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.481493] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 00:13:14.650 Latency(us) 00:13:14.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.650 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:14.650 Nvme1n1 : 5.01 12194.92 95.27 0.00 0.00 10480.81 2754.94 22524.97 00:13:14.650 =================================================================================================================== 00:13:14.650 Total : 12194.92 95.27 0.00 0.00 10480.81 2754.94 22524.97 00:13:14.650 [2024-05-15 04:14:02.489397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.489427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.497415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.497445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.505433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.505460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.513521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.513572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.521534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.521582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.529553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.529601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.537578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.537627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.545608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.545663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.553629] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.553681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.561649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.561699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.569677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.569727] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.577698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.577747] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.585722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.585773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.593728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.593779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.601752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.601811] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.609771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.609822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.617794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.617842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.625761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.625788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.633781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.633806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.641802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.641828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.649822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.649847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.650 [2024-05-15 04:14:02.657848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.650 [2024-05-15 04:14:02.657877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.665925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.665980] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.673967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.674015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.681912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.681947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.689944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.689984] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.697960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.698009] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.705997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.706019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.714027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.714061] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.722086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.722134] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.730109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.730151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.738074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.738095] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.746094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.746115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 [2024-05-15 04:14:02.754115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.909 [2024-05-15 04:14:02.754138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3354690) - No such process 00:13:14.909 04:14:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3354690 00:13:14.909 04:14:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.909 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.909 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.909 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.910 04:14:02 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 delay0 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.910 04:14:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:14.910 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.910 [2024-05-15 04:14:02.881152] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:21.515 Initializing NVMe Controllers 00:13:21.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:21.515 Initialization complete. Launching workers. 00:13:21.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 58 00:13:21.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 345, failed to submit 33 00:13:21.515 success 121, unsuccess 224, failed 0 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.515 rmmod nvme_tcp 00:13:21.515 rmmod nvme_fabrics 00:13:21.515 rmmod nvme_keyring 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3353337 ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3353337 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3353337 ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3353337 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3353337 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:21.515 
04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3353337' 00:13:21.515 killing process with pid 3353337 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3353337 00:13:21.515 [2024-05-15 04:14:09.127035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3353337 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.515 04:14:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.053 04:14:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.053 00:13:24.053 real 0m29.202s 00:13:24.053 user 0m42.714s 00:13:24.053 sys 0m8.621s 00:13:24.053 04:14:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.053 04:14:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:24.053 ************************************ 00:13:24.053 END TEST nvmf_zcopy 00:13:24.053 ************************************ 00:13:24.053 04:14:11 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:24.053 04:14:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:24.053 04:14:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.053 04:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.053 ************************************ 00:13:24.053 START TEST nvmf_nmic 00:13:24.053 ************************************ 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:24.053 * Looking for test storage... 
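The zcopy run that just finished above wires an artificially slow namespace into cnode1 and then drives it with SPDK's abort example, so abort commands race against in-flight I/O. A minimal shell sketch of that sequence, assuming a target is already listening on 10.0.0.2:4420 and a bdev named malloc0 exists (names and flags are taken from the log; the four bdev_delay_create latency arguments are in microseconds):

  # Sketch only, not the verbatim zcopy.sh script: swap the plain namespace for a delay bdev
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive the slow namespace so submitted aborts have outstanding commands to hit
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'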
00:13:24.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.053 04:14:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.054 04:14:11 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.054 04:14:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.589 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.590 
04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:26.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.590 04:14:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:26.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:26.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:26.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
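The detection loop above narrows the NIC list to Intel E810 functions (vendor 0x8086, device 0x159b) and resolves each PCI address to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A rough standalone equivalent of that lookup (a hypothetical helper, not the harness's own code; it assumes lspci is available and uses the same /sys/bus/pci path the log shows):

  # List the net devices backing each E810 (0x8086:0x159b) PCI function
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
      done
  done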
00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:26.590 00:13:26.590 --- 10.0.0.2 ping statistics --- 00:13:26.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.590 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:26.590 00:13:26.590 --- 10.0.0.1 ping statistics --- 00:13:26.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.590 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3358482 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3358482 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3358482 ']' 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:26.590 04:14:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.590 [2024-05-15 04:14:14.361366] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:26.590 [2024-05-15 04:14:14.361451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.590 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.590 [2024-05-15 04:14:14.444196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.590 [2024-05-15 04:14:14.562037] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.591 [2024-05-15 04:14:14.562100] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:26.591 [2024-05-15 04:14:14.562116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.591 [2024-05-15 04:14:14.562130] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.591 [2024-05-15 04:14:14.562141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.591 [2024-05-15 04:14:14.562230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.591 [2024-05-15 04:14:14.562309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.591 [2024-05-15 04:14:14.562397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.591 [2024-05-15 04:14:14.562400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.522 [2024-05-15 04:14:15.329874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.522 Malloc0 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.522 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 [2024-05-15 04:14:15.382987] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:27.523 [2024-05-15 04:14:15.383306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:27.523 test case1: single bdev can't be used in multiple subsystems 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 [2024-05-15 04:14:15.407096] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:27.523 [2024-05-15 04:14:15.407125] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:27.523 [2024-05-15 04:14:15.407156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.523 request: 00:13:27.523 { 00:13:27.523 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:27.523 "namespace": { 00:13:27.523 "bdev_name": "Malloc0", 00:13:27.523 "no_auto_visible": false 00:13:27.523 }, 00:13:27.523 "method": "nvmf_subsystem_add_ns", 00:13:27.523 "req_id": 1 00:13:27.523 } 00:13:27.523 Got JSON-RPC error response 00:13:27.523 response: 00:13:27.523 { 00:13:27.523 "code": -32602, 00:13:27.523 "message": "Invalid parameters" 00:13:27.523 } 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:27.523 Adding namespace failed - expected result. 
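The JSON-RPC error above is the expected outcome of nmic test case1: a bdev that is already claimed by one subsystem cannot be added as a namespace of a second one. A minimal sketch of the same RPC sequence, assuming a running nvmf_tgt listening on the default /var/tmp/spdk.sock; rpc.py stands for scripts/rpc.py from the SPDK tree, and all commands/arguments are the ones recorded in this log:
# Sketch only; reproduces the "already claimed" failure seen above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 is already claimed by cnode1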
00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:27.523 test case2: host connect to nvmf target in multiple paths 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 [2024-05-15 04:14:15.415223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.523 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.089 04:14:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:28.655 04:14:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.655 04:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:28.655 04:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.655 04:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:28.655 04:14:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:31.181 04:14:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:31.181 [global] 00:13:31.181 thread=1 00:13:31.181 invalidate=1 00:13:31.181 rw=write 00:13:31.181 time_based=1 00:13:31.181 runtime=1 00:13:31.181 ioengine=libaio 00:13:31.181 direct=1 00:13:31.181 bs=4096 00:13:31.181 iodepth=1 00:13:31.181 norandommap=0 00:13:31.181 numjobs=1 00:13:31.181 00:13:31.181 verify_dump=1 00:13:31.181 verify_backlog=512 00:13:31.181 verify_state_save=0 00:13:31.181 do_verify=1 00:13:31.181 verify=crc32c-intel 00:13:31.181 [job0] 00:13:31.181 filename=/dev/nvme0n1 00:13:31.181 Could not set queue depth (nvme0n1) 00:13:31.181 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.181 fio-3.35 00:13:31.181 Starting 1 thread 00:13:32.114 00:13:32.114 job0: (groupid=0, jobs=1): err= 0: pid=3359124: Wed May 15 04:14:19 2024 00:13:32.114 read: IOPS=18, BW=74.4KiB/s (76.2kB/s)(76.0KiB/1021msec) 00:13:32.114 slat (nsec): min=15039, max=35059, avg=20964.42, stdev=8345.41 
00:13:32.114 clat (usec): min=40833, max=41989, avg=41175.05, stdev=412.27 00:13:32.114 lat (usec): min=40866, max=42006, avg=41196.01, stdev=414.00 00:13:32.114 clat percentiles (usec): 00:13:32.114 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:32.114 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:32.114 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:32.114 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:32.114 | 99.99th=[42206] 00:13:32.114 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:13:32.114 slat (usec): min=10, max=31732, avg=94.44, stdev=1401.01 00:13:32.114 clat (usec): min=229, max=578, avg=362.35, stdev=77.56 00:13:32.114 lat (usec): min=241, max=32141, avg=456.79, stdev=1405.63 00:13:32.114 clat percentiles (usec): 00:13:32.114 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 281], 20.00th=[ 297], 00:13:32.114 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 359], 60.00th=[ 375], 00:13:32.114 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 474], 95.00th=[ 519], 00:13:32.114 | 99.00th=[ 553], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 578], 00:13:32.114 | 99.99th=[ 578] 00:13:32.114 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.114 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.114 lat (usec) : 250=3.58%, 500=86.82%, 750=6.03% 00:13:32.114 lat (msec) : 50=3.58% 00:13:32.114 cpu : usr=0.69%, sys=2.35%, ctx=533, majf=0, minf=2 00:13:32.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.114 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.114 00:13:32.114 Run status group 0 (all jobs): 00:13:32.114 READ: bw=74.4KiB/s (76.2kB/s), 74.4KiB/s-74.4KiB/s (76.2kB/s-76.2kB/s), io=76.0KiB (77.8kB), run=1021-1021msec 00:13:32.114 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:13:32.114 00:13:32.114 Disk stats (read/write): 00:13:32.114 nvme0n1: ios=42/512, merge=0/0, ticks=1650/155, in_queue=1805, util=98.80% 00:13:32.114 04:14:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.114 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.114 rmmod nvme_tcp 00:13:32.372 rmmod nvme_fabrics 00:13:32.372 rmmod nvme_keyring 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3358482 ']' 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3358482 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3358482 ']' 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3358482 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3358482 00:13:32.372 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.373 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.373 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3358482' 00:13:32.373 killing process with pid 3358482 00:13:32.373 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3358482 00:13:32.373 [2024-05-15 04:14:20.193361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:32.373 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3358482 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.632 04:14:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.164 04:14:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.164 00:13:35.164 real 0m11.031s 00:13:35.164 user 0m24.927s 00:13:35.164 sys 0m2.726s 00:13:35.164 04:14:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.164 04:14:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.164 ************************************ 00:13:35.164 END TEST nvmf_nmic 00:13:35.164 ************************************ 00:13:35.164 04:14:22 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:35.164 04:14:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:35.164 04:14:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.164 04:14:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.164 ************************************ 00:13:35.164 START TEST nvmf_fio_target 00:13:35.164 ************************************ 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:35.164 * Looking for test storage... 00:13:35.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.164 04:14:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.723 04:14:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:37.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:37.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.723 04:14:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:37.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.723 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:37.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:13:37.724 00:13:37.724 --- 10.0.0.2 ping statistics --- 00:13:37.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.724 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:37.724 00:13:37.724 --- 10.0.0.1 ping statistics --- 00:13:37.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.724 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3361493 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3361493 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3361493 ']' 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
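The target started above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init created a few lines earlier. A condensed sketch of that plumbing, using the interface names and addresses reported in this log (cvl_0_0 is the target-side port moved into the namespace, cvl_0_1 the initiator-side port left in the root namespace):
# Condensed sketch of the namespace setup performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability check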
00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.724 [2024-05-15 04:14:25.352538] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:13:37.724 [2024-05-15 04:14:25.352625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.724 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.724 [2024-05-15 04:14:25.427980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.724 [2024-05-15 04:14:25.536584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.724 [2024-05-15 04:14:25.536629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.724 [2024-05-15 04:14:25.536657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.724 [2024-05-15 04:14:25.536668] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.724 [2024-05-15 04:14:25.536677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.724 [2024-05-15 04:14:25.536773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.724 [2024-05-15 04:14:25.536833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.724 [2024-05-15 04:14:25.536902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.724 [2024-05-15 04:14:25.536905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.724 04:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.993 [2024-05-15 04:14:25.893221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.993 04:14:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.251 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:38.251 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.509 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:38.509 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.767 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:38.767 04:14:26 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.025 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:39.025 04:14:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:39.282 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.541 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:39.541 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.798 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:39.798 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.056 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:40.056 04:14:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:40.314 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:40.572 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:40.572 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.830 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:40.830 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:41.088 04:14:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.345 [2024-05-15 04:14:29.188777] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:41.345 [2024-05-15 04:14:29.189093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.345 04:14:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:41.603 04:14:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:41.861 04:14:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:42.427 04:14:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:44.324 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:44.324 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:44.324 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.582 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:44.582 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.582 04:14:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:44.582 04:14:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:44.582 [global] 00:13:44.582 thread=1 00:13:44.582 invalidate=1 00:13:44.582 rw=write 00:13:44.582 time_based=1 00:13:44.582 runtime=1 00:13:44.582 ioengine=libaio 00:13:44.582 direct=1 00:13:44.582 bs=4096 00:13:44.582 iodepth=1 00:13:44.582 norandommap=0 00:13:44.582 numjobs=1 00:13:44.582 00:13:44.582 verify_dump=1 00:13:44.582 verify_backlog=512 00:13:44.582 verify_state_save=0 00:13:44.582 do_verify=1 00:13:44.582 verify=crc32c-intel 00:13:44.582 [job0] 00:13:44.582 filename=/dev/nvme0n1 00:13:44.582 [job1] 00:13:44.582 filename=/dev/nvme0n2 00:13:44.582 [job2] 00:13:44.582 filename=/dev/nvme0n3 00:13:44.582 [job3] 00:13:44.582 filename=/dev/nvme0n4 00:13:44.582 Could not set queue depth (nvme0n1) 00:13:44.582 Could not set queue depth (nvme0n2) 00:13:44.582 Could not set queue depth (nvme0n3) 00:13:44.582 Could not set queue depth (nvme0n4) 00:13:44.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.582 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.582 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.582 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.582 fio-3.35 00:13:44.582 Starting 4 threads 00:13:45.953 00:13:45.953 job0: (groupid=0, jobs=1): err= 0: pid=3362563: Wed May 15 04:14:33 2024 00:13:45.953 read: IOPS=1211, BW=4847KiB/s (4963kB/s)(4852KiB/1001msec) 00:13:45.953 slat (nsec): min=5603, max=43987, avg=11699.90, stdev=4810.34 00:13:45.953 clat (usec): min=344, max=607, avg=412.74, stdev=46.35 00:13:45.953 lat (usec): min=351, max=614, avg=424.44, stdev=46.38 00:13:45.953 clat percentiles (usec): 00:13:45.953 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 375], 00:13:45.953 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[ 400], 00:13:45.953 | 70.00th=[ 433], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 494], 00:13:45.953 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 603], 99.95th=[ 611], 00:13:45.953 | 99.99th=[ 611] 
00:13:45.953 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:45.953 slat (nsec): min=7173, max=69598, avg=15389.59, stdev=9721.07 00:13:45.953 clat (usec): min=218, max=550, avg=293.47, stdev=74.13 00:13:45.953 lat (usec): min=226, max=591, avg=308.86, stdev=79.80 00:13:45.953 clat percentiles (usec): 00:13:45.953 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:13:45.954 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:13:45.954 | 70.00th=[ 297], 80.00th=[ 363], 90.00th=[ 416], 95.00th=[ 449], 00:13:45.954 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 553], 00:13:45.954 | 99.99th=[ 553] 00:13:45.954 bw ( KiB/s): min= 6360, max= 6360, per=44.98%, avg=6360.00, stdev= 0.00, samples=1 00:13:45.954 iops : min= 1590, max= 1590, avg=1590.00, stdev= 0.00, samples=1 00:13:45.954 lat (usec) : 250=20.84%, 500=77.16%, 750=2.00% 00:13:45.954 cpu : usr=2.80%, sys=5.10%, ctx=2753, majf=0, minf=1 00:13:45.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 issued rwts: total=1213,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.954 job1: (groupid=0, jobs=1): err= 0: pid=3362565: Wed May 15 04:14:33 2024 00:13:45.954 read: IOPS=24, BW=98.7KiB/s (101kB/s)(100KiB/1013msec) 00:13:45.954 slat (nsec): min=5447, max=41260, avg=19988.72, stdev=9852.02 00:13:45.954 clat (usec): min=481, max=41121, avg=34617.76, stdev=14829.70 00:13:45.954 lat (usec): min=488, max=41156, avg=34637.74, stdev=14830.97 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 482], 5.00th=[ 586], 10.00th=[ 701], 20.00th=[40633], 00:13:45.954 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:45.954 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:45.954 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:45.954 | 99.99th=[41157] 00:13:45.954 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:13:45.954 slat (nsec): min=6412, max=24165, avg=9342.31, stdev=3454.75 00:13:45.954 clat (usec): min=223, max=704, avg=273.94, stdev=52.26 00:13:45.954 lat (usec): min=230, max=714, avg=283.28, stdev=53.12 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:13:45.954 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:13:45.954 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 355], 95.00th=[ 379], 00:13:45.954 | 99.00th=[ 433], 99.50th=[ 494], 99.90th=[ 709], 99.95th=[ 709], 00:13:45.954 | 99.99th=[ 709] 00:13:45.954 bw ( KiB/s): min= 4096, max= 4096, per=28.97%, avg=4096.00, stdev= 0.00, samples=1 00:13:45.954 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:45.954 lat (usec) : 250=41.71%, 500=53.45%, 750=0.74% 00:13:45.954 lat (msec) : 4=0.19%, 50=3.91% 00:13:45.954 cpu : usr=0.10%, sys=0.69%, ctx=539, majf=0, minf=2 00:13:45.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.954 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:13:45.954 job2: (groupid=0, jobs=1): err= 0: pid=3362566: Wed May 15 04:14:33 2024 00:13:45.954 read: IOPS=514, BW=2059KiB/s (2109kB/s)(2088KiB/1014msec) 00:13:45.954 slat (nsec): min=5783, max=33729, avg=7997.99, stdev=3704.83 00:13:45.954 clat (usec): min=456, max=41880, avg=1281.12, stdev=5569.23 00:13:45.954 lat (usec): min=463, max=41895, avg=1289.12, stdev=5571.24 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 461], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:13:45.954 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 494], 00:13:45.954 | 70.00th=[ 506], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 603], 00:13:45.954 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:45.954 | 99.99th=[41681] 00:13:45.954 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:13:45.954 slat (nsec): min=7342, max=61113, avg=14507.82, stdev=7117.54 00:13:45.954 clat (usec): min=229, max=705, avg=313.20, stdev=85.13 00:13:45.954 lat (usec): min=241, max=726, avg=327.70, stdev=88.14 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:13:45.954 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:13:45.954 | 70.00th=[ 322], 80.00th=[ 375], 90.00th=[ 445], 95.00th=[ 494], 00:13:45.954 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 693], 99.95th=[ 709], 00:13:45.954 | 99.99th=[ 709] 00:13:45.954 bw ( KiB/s): min= 1912, max= 6280, per=28.97%, avg=4096.00, stdev=3088.64, samples=2 00:13:45.954 iops : min= 478, max= 1570, avg=1024.00, stdev=772.16, samples=2 00:13:45.954 lat (usec) : 250=9.31%, 500=75.81%, 750=14.23% 00:13:45.954 lat (msec) : 50=0.65% 00:13:45.954 cpu : usr=1.38%, sys=2.17%, ctx=1546, majf=0, minf=1 00:13:45.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.954 job3: (groupid=0, jobs=1): err= 0: pid=3362567: Wed May 15 04:14:33 2024 00:13:45.954 read: IOPS=29, BW=120KiB/s (123kB/s)(120KiB/1003msec) 00:13:45.954 slat (nsec): min=8733, max=35944, avg=17289.43, stdev=5609.24 00:13:45.954 clat (usec): min=431, max=41491, avg=28850.06, stdev=18879.31 00:13:45.954 lat (usec): min=450, max=41502, avg=28867.35, stdev=18878.85 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 433], 5.00th=[ 474], 10.00th=[ 478], 20.00th=[ 510], 00:13:45.954 | 30.00th=[ 529], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:45.954 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:45.954 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:45.954 | 99.99th=[41681] 00:13:45.954 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:45.954 slat (nsec): min=7675, max=33468, avg=9278.35, stdev=2790.21 00:13:45.954 clat (usec): min=229, max=744, avg=254.34, stdev=31.95 00:13:45.954 lat (usec): min=237, max=752, avg=263.62, stdev=32.56 00:13:45.954 clat percentiles (usec): 00:13:45.954 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:13:45.954 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:13:45.954 | 70.00th=[ 258], 80.00th=[ 260], 90.00th=[ 269], 
95.00th=[ 277], 00:13:45.954 | 99.00th=[ 404], 99.50th=[ 474], 99.90th=[ 742], 99.95th=[ 742], 00:13:45.954 | 99.99th=[ 742] 00:13:45.954 bw ( KiB/s): min= 4096, max= 4096, per=28.97%, avg=4096.00, stdev= 0.00, samples=1 00:13:45.954 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:45.954 lat (usec) : 250=51.11%, 500=43.73%, 750=1.29% 00:13:45.954 lat (msec) : 50=3.87% 00:13:45.954 cpu : usr=0.30%, sys=0.70%, ctx=543, majf=0, minf=1 00:13:45.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.954 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.954 00:13:45.954 Run status group 0 (all jobs): 00:13:45.954 READ: bw=7061KiB/s (7231kB/s), 98.7KiB/s-4847KiB/s (101kB/s-4963kB/s), io=7160KiB (7332kB), run=1001-1014msec 00:13:45.954 WRITE: bw=13.8MiB/s (14.5MB/s), 2022KiB/s-6138KiB/s (2070kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1014msec 00:13:45.954 00:13:45.954 Disk stats (read/write): 00:13:45.954 nvme0n1: ios=1050/1300, merge=0/0, ticks=1398/366, in_queue=1764, util=97.70% 00:13:45.954 nvme0n2: ios=74/512, merge=0/0, ticks=1083/139, in_queue=1222, util=100.00% 00:13:45.954 nvme0n3: ios=571/1024, merge=0/0, ticks=602/303, in_queue=905, util=96.23% 00:13:45.954 nvme0n4: ios=26/512, merge=0/0, ticks=702/127, in_queue=829, util=89.56% 00:13:45.954 04:14:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:45.954 [global] 00:13:45.954 thread=1 00:13:45.954 invalidate=1 00:13:45.954 rw=randwrite 00:13:45.954 time_based=1 00:13:45.954 runtime=1 00:13:45.954 ioengine=libaio 00:13:45.954 direct=1 00:13:45.954 bs=4096 00:13:45.954 iodepth=1 00:13:45.954 norandommap=0 00:13:45.954 numjobs=1 00:13:45.954 00:13:45.954 verify_dump=1 00:13:45.954 verify_backlog=512 00:13:45.954 verify_state_save=0 00:13:45.954 do_verify=1 00:13:45.954 verify=crc32c-intel 00:13:45.954 [job0] 00:13:45.954 filename=/dev/nvme0n1 00:13:45.954 [job1] 00:13:45.954 filename=/dev/nvme0n2 00:13:45.954 [job2] 00:13:45.954 filename=/dev/nvme0n3 00:13:45.954 [job3] 00:13:45.954 filename=/dev/nvme0n4 00:13:45.954 Could not set queue depth (nvme0n1) 00:13:45.954 Could not set queue depth (nvme0n2) 00:13:45.954 Could not set queue depth (nvme0n3) 00:13:45.954 Could not set queue depth (nvme0n4) 00:13:46.212 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.212 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.212 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.212 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.212 fio-3.35 00:13:46.212 Starting 4 threads 00:13:47.582 00:13:47.582 job0: (groupid=0, jobs=1): err= 0: pid=3362793: Wed May 15 04:14:35 2024 00:13:47.582 read: IOPS=290, BW=1162KiB/s (1190kB/s)(1168KiB/1005msec) 00:13:47.582 slat (nsec): min=10098, max=60251, avg=33274.13, stdev=5676.85 00:13:47.582 clat (usec): min=491, max=41436, avg=2878.59, stdev=9196.32 00:13:47.582 lat (usec): min=525, max=41446, avg=2911.87, stdev=9193.43 
00:13:47.582 clat percentiles (usec): 00:13:47.582 | 1.00th=[ 537], 5.00th=[ 603], 10.00th=[ 611], 20.00th=[ 619], 00:13:47.582 | 30.00th=[ 627], 40.00th=[ 627], 50.00th=[ 635], 60.00th=[ 644], 00:13:47.582 | 70.00th=[ 652], 80.00th=[ 725], 90.00th=[ 938], 95.00th=[41157], 00:13:47.582 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:47.583 | 99.99th=[41681] 00:13:47.583 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:13:47.583 slat (nsec): min=7407, max=32325, avg=10714.38, stdev=4409.18 00:13:47.583 clat (usec): min=233, max=480, avg=281.67, stdev=39.91 00:13:47.583 lat (usec): min=242, max=496, avg=292.39, stdev=41.20 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:13:47.583 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:13:47.583 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 396], 00:13:47.583 | 99.00th=[ 416], 99.50th=[ 469], 99.90th=[ 482], 99.95th=[ 482], 00:13:47.583 | 99.99th=[ 482] 00:13:47.583 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.583 lat (usec) : 250=7.21%, 500=56.59%, 750=30.35%, 1000=2.99% 00:13:47.583 lat (msec) : 2=0.87%, 50=1.99% 00:13:47.583 cpu : usr=0.90%, sys=1.79%, ctx=805, majf=0, minf=1 00:13:47.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 issued rwts: total=292,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.583 job1: (groupid=0, jobs=1): err= 0: pid=3362794: Wed May 15 04:14:35 2024 00:13:47.583 read: IOPS=299, BW=1199KiB/s (1228kB/s)(1248KiB/1041msec) 00:13:47.583 slat (nsec): min=5243, max=46986, avg=12762.00, stdev=7058.72 00:13:47.583 clat (usec): min=329, max=41150, avg=2739.28, stdev=9471.19 00:13:47.583 lat (usec): min=335, max=41163, avg=2752.04, stdev=9474.10 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:13:47.583 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 404], 00:13:47.583 | 70.00th=[ 416], 80.00th=[ 441], 90.00th=[ 482], 95.00th=[40633], 00:13:47.583 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:47.583 | 99.99th=[41157] 00:13:47.583 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:13:47.583 slat (nsec): min=6610, max=37301, avg=12463.72, stdev=4712.82 00:13:47.583 clat (usec): min=219, max=598, avg=337.12, stdev=82.31 00:13:47.583 lat (usec): min=227, max=614, avg=349.58, stdev=84.13 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:13:47.583 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 314], 60.00th=[ 379], 00:13:47.583 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 449], 95.00th=[ 474], 00:13:47.583 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 603], 99.95th=[ 603], 00:13:47.583 | 99.99th=[ 603] 00:13:47.583 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.583 lat (usec) : 250=6.07%, 500=88.59%, 750=3.16% 00:13:47.583 lat (msec) : 50=2.18% 00:13:47.583 cpu 
: usr=0.29%, sys=1.25%, ctx=826, majf=0, minf=1 00:13:47.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 issued rwts: total=312,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.583 job2: (groupid=0, jobs=1): err= 0: pid=3362795: Wed May 15 04:14:35 2024 00:13:47.583 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:13:47.583 slat (nsec): min=13500, max=36038, avg=24390.23, stdev=10489.82 00:13:47.583 clat (usec): min=570, max=41240, avg=39141.59, stdev=8615.26 00:13:47.583 lat (usec): min=584, max=41254, avg=39165.98, stdev=8617.68 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 570], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:47.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:47.583 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:47.583 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:47.583 | 99.99th=[41157] 00:13:47.583 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:13:47.583 slat (nsec): min=6712, max=30871, avg=11400.12, stdev=4975.89 00:13:47.583 clat (usec): min=222, max=516, avg=288.25, stdev=60.29 00:13:47.583 lat (usec): min=230, max=526, avg=299.65, stdev=61.17 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:13:47.583 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:13:47.583 | 70.00th=[ 293], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 408], 00:13:47.583 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 519], 00:13:47.583 | 99.99th=[ 519] 00:13:47.583 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.583 lat (usec) : 250=30.71%, 500=64.61%, 750=0.75% 00:13:47.583 lat (msec) : 50=3.93% 00:13:47.583 cpu : usr=0.00%, sys=0.79%, ctx=535, majf=0, minf=2 00:13:47.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.583 job3: (groupid=0, jobs=1): err= 0: pid=3362796: Wed May 15 04:14:35 2024 00:13:47.583 read: IOPS=986, BW=3944KiB/s (4039kB/s)(3948KiB/1001msec) 00:13:47.583 slat (nsec): min=6064, max=67779, avg=22631.02, stdev=11185.51 00:13:47.583 clat (usec): min=455, max=41297, avg=666.13, stdev=2227.30 00:13:47.583 lat (usec): min=467, max=41311, avg=688.76, stdev=2227.00 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 498], 00:13:47.583 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 537], 00:13:47.583 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 652], 00:13:47.583 | 99.00th=[ 676], 99.50th=[ 947], 99.90th=[41157], 99.95th=[41157], 00:13:47.583 | 99.99th=[41157] 00:13:47.583 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:47.583 slat (nsec): min=6305, max=59472, 
avg=13481.58, stdev=5006.99 00:13:47.583 clat (usec): min=229, max=536, avg=289.65, stdev=61.64 00:13:47.583 lat (usec): min=239, max=545, avg=303.13, stdev=60.14 00:13:47.583 clat percentiles (usec): 00:13:47.583 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:13:47.583 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:13:47.583 | 70.00th=[ 289], 80.00th=[ 347], 90.00th=[ 396], 95.00th=[ 424], 00:13:47.583 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 523], 99.95th=[ 537], 00:13:47.583 | 99.99th=[ 537] 00:13:47.583 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.583 lat (usec) : 250=16.46%, 500=46.39%, 750=36.85%, 1000=0.10% 00:13:47.583 lat (msec) : 2=0.05%, 50=0.15% 00:13:47.583 cpu : usr=1.70%, sys=4.00%, ctx=2014, majf=0, minf=1 00:13:47.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.583 issued rwts: total=987,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.583 00:13:47.583 Run status group 0 (all jobs): 00:13:47.583 READ: bw=6198KiB/s (6347kB/s), 86.5KiB/s-3944KiB/s (88.6kB/s-4039kB/s), io=6452KiB (6607kB), run=1001-1041msec 00:13:47.583 WRITE: bw=9837KiB/s (10.1MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1041msec 00:13:47.583 00:13:47.583 Disk stats (read/write): 00:13:47.583 nvme0n1: ios=175/512, merge=0/0, ticks=892/141, in_queue=1033, util=91.78% 00:13:47.583 nvme0n2: ios=347/512, merge=0/0, ticks=1612/170, in_queue=1782, util=98.27% 00:13:47.583 nvme0n3: ios=60/512, merge=0/0, ticks=1675/149, in_queue=1824, util=98.23% 00:13:47.583 nvme0n4: ios=780/1024, merge=0/0, ticks=1457/296, in_queue=1753, util=96.95% 00:13:47.583 04:14:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:47.583 [global] 00:13:47.583 thread=1 00:13:47.583 invalidate=1 00:13:47.583 rw=write 00:13:47.583 time_based=1 00:13:47.583 runtime=1 00:13:47.583 ioengine=libaio 00:13:47.583 direct=1 00:13:47.583 bs=4096 00:13:47.583 iodepth=128 00:13:47.583 norandommap=0 00:13:47.583 numjobs=1 00:13:47.583 00:13:47.583 verify_dump=1 00:13:47.583 verify_backlog=512 00:13:47.583 verify_state_save=0 00:13:47.583 do_verify=1 00:13:47.583 verify=crc32c-intel 00:13:47.583 [job0] 00:13:47.583 filename=/dev/nvme0n1 00:13:47.583 [job1] 00:13:47.583 filename=/dev/nvme0n2 00:13:47.583 [job2] 00:13:47.583 filename=/dev/nvme0n3 00:13:47.584 [job3] 00:13:47.584 filename=/dev/nvme0n4 00:13:47.584 Could not set queue depth (nvme0n1) 00:13:47.584 Could not set queue depth (nvme0n2) 00:13:47.584 Could not set queue depth (nvme0n3) 00:13:47.584 Could not set queue depth (nvme0n4) 00:13:47.584 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.584 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.584 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.584 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
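The [global]/[job0..3] parameters that fio echoes above are easier to read as a plain job file. A minimal reconstruction from those logged values (the nvmf-write.fio name is only illustrative; presumably the fio-wrapper script assembles the real configuration itself):

# sequential 4 KiB writes, queue depth 128, with CRC32C data verification
cat > nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-write.fio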
00:13:47.584 fio-3.35 00:13:47.584 Starting 4 threads 00:13:48.955 00:13:48.955 job0: (groupid=0, jobs=1): err= 0: pid=3363039: Wed May 15 04:14:36 2024 00:13:48.955 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:13:48.955 slat (usec): min=3, max=19976, avg=166.15, stdev=1121.95 00:13:48.955 clat (msec): min=8, max=122, avg=19.18, stdev=13.98 00:13:48.955 lat (msec): min=9, max=122, avg=19.34, stdev=14.13 00:13:48.955 clat percentiles (msec): 00:13:48.955 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 13], 00:13:48.955 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 17], 00:13:48.955 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 25], 95.00th=[ 35], 00:13:48.955 | 99.00th=[ 101], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:13:48.955 | 99.99th=[ 123] 00:13:48.955 write: IOPS=2911, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1008msec); 0 zone resets 00:13:48.955 slat (usec): min=4, max=18865, avg=184.89, stdev=1115.19 00:13:48.955 clat (msec): min=2, max=122, avg=26.83, stdev=28.79 00:13:48.955 lat (msec): min=2, max=122, avg=27.02, stdev=28.97 00:13:48.955 clat percentiles (msec): 00:13:48.955 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:13:48.955 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:13:48.955 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 92], 95.00th=[ 103], 00:13:48.956 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 123], 00:13:48.956 | 99.99th=[ 123] 00:13:48.956 bw ( KiB/s): min= 7728, max=14736, per=21.29%, avg=11232.00, stdev=4955.40, samples=2 00:13:48.956 iops : min= 1932, max= 3684, avg=2808.00, stdev=1238.85, samples=2 00:13:48.956 lat (msec) : 4=0.56%, 10=6.95%, 20=60.33%, 50=23.88%, 100=5.08% 00:13:48.956 lat (msec) : 250=3.20% 00:13:48.956 cpu : usr=4.17%, sys=6.06%, ctx=225, majf=0, minf=1 00:13:48.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.956 issued rwts: total=2560,2935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.956 job1: (groupid=0, jobs=1): err= 0: pid=3363049: Wed May 15 04:14:36 2024 00:13:48.956 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:13:48.956 slat (usec): min=2, max=10589, avg=108.96, stdev=626.21 00:13:48.956 clat (usec): min=6894, max=72623, avg=13201.87, stdev=7011.23 00:13:48.956 lat (usec): min=6899, max=72634, avg=13310.83, stdev=7081.95 00:13:48.956 clat percentiles (usec): 00:13:48.956 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:13:48.956 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:13:48.956 | 70.00th=[12911], 80.00th=[13698], 90.00th=[15270], 95.00th=[17433], 00:13:48.956 | 99.00th=[64226], 99.50th=[64226], 99.90th=[72877], 99.95th=[72877], 00:13:48.956 | 99.99th=[72877] 00:13:48.956 write: IOPS=3546, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1002msec); 0 zone resets 00:13:48.956 slat (usec): min=3, max=34239, avg=180.03, stdev=1325.41 00:13:48.956 clat (usec): min=436, max=96015, avg=24273.92, stdev=16658.39 00:13:48.956 lat (usec): min=3483, max=96020, avg=24453.96, stdev=16744.78 00:13:48.956 clat percentiles (usec): 00:13:48.956 | 1.00th=[ 7373], 5.00th=[11076], 10.00th=[12256], 20.00th=[13566], 00:13:48.956 | 30.00th=[15008], 40.00th=[16581], 50.00th=[18220], 60.00th=[18744], 00:13:48.956 | 70.00th=[23200], 80.00th=[34866], 
90.00th=[45351], 95.00th=[61080], 00:13:48.956 | 99.00th=[87557], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:13:48.956 | 99.99th=[95945] 00:13:48.956 bw ( KiB/s): min=12920, max=14488, per=25.98%, avg=13704.00, stdev=1108.74, samples=2 00:13:48.956 iops : min= 3230, max= 3622, avg=3426.00, stdev=277.19, samples=2 00:13:48.956 lat (usec) : 500=0.02% 00:13:48.956 lat (msec) : 4=0.33%, 10=6.14%, 20=73.80%, 50=15.77%, 100=3.94% 00:13:48.956 cpu : usr=2.80%, sys=4.70%, ctx=387, majf=0, minf=1 00:13:48.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.956 issued rwts: total=3072,3554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.956 job2: (groupid=0, jobs=1): err= 0: pid=3363089: Wed May 15 04:14:36 2024 00:13:48.956 read: IOPS=3564, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1009msec) 00:13:48.956 slat (usec): min=2, max=27004, avg=133.86, stdev=1043.06 00:13:48.956 clat (usec): min=863, max=80905, avg=17544.68, stdev=11994.97 00:13:48.956 lat (usec): min=5590, max=80920, avg=17678.53, stdev=12052.79 00:13:48.956 clat percentiles (usec): 00:13:48.956 | 1.00th=[ 6652], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10683], 00:13:48.956 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:13:48.956 | 70.00th=[16188], 80.00th=[20841], 90.00th=[32113], 95.00th=[44303], 00:13:48.956 | 99.00th=[64226], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:13:48.956 | 99.99th=[81265] 00:13:48.956 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:13:48.956 slat (usec): min=3, max=17491, avg=118.73, stdev=863.44 00:13:48.956 clat (usec): min=859, max=50076, avg=15750.20, stdev=8549.93 00:13:48.956 lat (usec): min=881, max=53517, avg=15868.93, stdev=8607.76 00:13:48.956 clat percentiles (usec): 00:13:48.956 | 1.00th=[ 2540], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[10552], 00:13:48.956 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13173], 60.00th=[14222], 00:13:48.956 | 70.00th=[15664], 80.00th=[19006], 90.00th=[26346], 95.00th=[34866], 00:13:48.956 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:13:48.956 | 99.99th=[50070] 00:13:48.956 bw ( KiB/s): min=14064, max=17784, per=30.19%, avg=15924.00, stdev=2630.44, samples=2 00:13:48.956 iops : min= 3516, max= 4446, avg=3981.00, stdev=657.61, samples=2 00:13:48.956 lat (usec) : 1000=0.04% 00:13:48.956 lat (msec) : 2=0.21%, 4=0.43%, 10=13.48%, 20=66.52%, 50=17.29% 00:13:48.956 lat (msec) : 100=2.04% 00:13:48.956 cpu : usr=2.48%, sys=5.85%, ctx=280, majf=0, minf=1 00:13:48.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.956 issued rwts: total=3597,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.956 job3: (groupid=0, jobs=1): err= 0: pid=3363097: Wed May 15 04:14:36 2024 00:13:48.956 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:13:48.956 slat (usec): min=2, max=41001, avg=201.16, stdev=1784.60 00:13:48.956 clat (msec): min=3, max=127, avg=28.42, stdev=28.24 00:13:48.956 lat (msec): min=3, max=127, avg=28.62, stdev=28.40 
00:13:48.956 clat percentiles (msec): 00:13:48.956 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:13:48.956 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:13:48.956 | 70.00th=[ 23], 80.00th=[ 29], 90.00th=[ 73], 95.00th=[ 104], 00:13:48.956 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:13:48.956 | 99.99th=[ 128] 00:13:48.956 write: IOPS=2699, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1008msec); 0 zone resets 00:13:48.956 slat (usec): min=3, max=31950, avg=165.01, stdev=1405.61 00:13:48.956 clat (usec): min=1689, max=66489, avg=20156.39, stdev=14339.59 00:13:48.956 lat (usec): min=1696, max=72913, avg=20321.41, stdev=14454.37 00:13:48.956 clat percentiles (usec): 00:13:48.956 | 1.00th=[ 3720], 5.00th=[ 6456], 10.00th=[ 8717], 20.00th=[10290], 00:13:48.956 | 30.00th=[11600], 40.00th=[12911], 50.00th=[15008], 60.00th=[16581], 00:13:48.956 | 70.00th=[19268], 80.00th=[30016], 90.00th=[42730], 95.00th=[55313], 00:13:48.956 | 99.00th=[65799], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:13:48.956 | 99.99th=[66323] 00:13:48.956 bw ( KiB/s): min= 4360, max=16384, per=19.66%, avg=10372.00, stdev=8502.25, samples=2 00:13:48.956 iops : min= 1090, max= 4096, avg=2593.00, stdev=2125.56, samples=2 00:13:48.956 lat (msec) : 2=0.13%, 4=0.66%, 10=8.50%, 20=58.25%, 50=21.78% 00:13:48.956 lat (msec) : 100=7.16%, 250=3.52% 00:13:48.956 cpu : usr=2.28%, sys=3.87%, ctx=137, majf=0, minf=1 00:13:48.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.956 issued rwts: total=2560,2721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.956 00:13:48.956 Run status group 0 (all jobs): 00:13:48.956 READ: bw=45.6MiB/s (47.9MB/s), 9.92MiB/s-13.9MiB/s (10.4MB/s-14.6MB/s), io=46.1MiB (48.3MB), run=1002-1009msec 00:13:48.956 WRITE: bw=51.5MiB/s (54.0MB/s), 10.5MiB/s-15.9MiB/s (11.1MB/s-16.6MB/s), io=52.0MiB (54.5MB), run=1002-1009msec 00:13:48.956 00:13:48.956 Disk stats (read/write): 00:13:48.956 nvme0n1: ios=1987/2048, merge=0/0, ticks=37532/65837, in_queue=103369, util=87.37% 00:13:48.956 nvme0n2: ios=2610/2764, merge=0/0, ticks=9732/22585, in_queue=32317, util=96.95% 00:13:48.956 nvme0n3: ios=3130/3477, merge=0/0, ticks=27092/27303, in_queue=54395, util=96.34% 00:13:48.956 nvme0n4: ios=2486/2560, merge=0/0, ticks=40205/43050, in_queue=83255, util=97.46% 00:13:48.956 04:14:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:48.956 [global] 00:13:48.956 thread=1 00:13:48.956 invalidate=1 00:13:48.956 rw=randwrite 00:13:48.956 time_based=1 00:13:48.956 runtime=1 00:13:48.956 ioengine=libaio 00:13:48.956 direct=1 00:13:48.956 bs=4096 00:13:48.956 iodepth=128 00:13:48.956 norandommap=0 00:13:48.956 numjobs=1 00:13:48.956 00:13:48.956 verify_dump=1 00:13:48.956 verify_backlog=512 00:13:48.956 verify_state_save=0 00:13:48.956 do_verify=1 00:13:48.956 verify=crc32c-intel 00:13:48.956 [job0] 00:13:48.956 filename=/dev/nvme0n1 00:13:48.956 [job1] 00:13:48.956 filename=/dev/nvme0n2 00:13:48.956 [job2] 00:13:48.956 filename=/dev/nvme0n3 00:13:48.956 [job3] 00:13:48.956 filename=/dev/nvme0n4 00:13:48.956 Could not set queue depth (nvme0n1) 00:13:48.956 Could not set queue depth (nvme0n2) 
00:13:48.956 Could not set queue depth (nvme0n3) 00:13:48.956 Could not set queue depth (nvme0n4) 00:13:49.215 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.215 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.215 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.215 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.215 fio-3.35 00:13:49.215 Starting 4 threads 00:13:50.589 00:13:50.589 job0: (groupid=0, jobs=1): err= 0: pid=3363375: Wed May 15 04:14:38 2024 00:13:50.589 read: IOPS=4289, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1002msec) 00:13:50.589 slat (usec): min=2, max=48483, avg=107.30, stdev=937.09 00:13:50.589 clat (usec): min=656, max=76097, avg=13263.33, stdev=9381.07 00:13:50.589 lat (usec): min=5643, max=76121, avg=13370.62, stdev=9437.58 00:13:50.589 clat percentiles (usec): 00:13:50.589 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9634], 00:13:50.589 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11469], 60.00th=[11863], 00:13:50.589 | 70.00th=[12256], 80.00th=[13173], 90.00th=[16909], 95.00th=[20317], 00:13:50.589 | 99.00th=[69731], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:13:50.589 | 99.99th=[76022] 00:13:50.589 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:50.589 slat (usec): min=3, max=27692, avg=111.28, stdev=655.05 00:13:50.589 clat (usec): min=5244, max=61791, avg=15155.97, stdev=6882.89 00:13:50.589 lat (usec): min=5250, max=61823, avg=15267.25, stdev=6898.67 00:13:50.589 clat percentiles (usec): 00:13:50.589 | 1.00th=[ 6849], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[11076], 00:13:50.589 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12649], 60.00th=[13960], 00:13:50.589 | 70.00th=[16712], 80.00th=[18744], 90.00th=[22414], 95.00th=[26084], 00:13:50.589 | 99.00th=[40633], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:13:50.589 | 99.99th=[61604] 00:13:50.589 bw ( KiB/s): min=16384, max=20480, per=29.43%, avg=18432.00, stdev=2896.31, samples=2 00:13:50.589 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:50.589 lat (usec) : 750=0.01% 00:13:50.589 lat (msec) : 10=21.26%, 20=68.52%, 50=8.79%, 100=1.43% 00:13:50.589 cpu : usr=2.60%, sys=5.59%, ctx=455, majf=0, minf=1 00:13:50.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:50.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.589 issued rwts: total=4298,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.589 job1: (groupid=0, jobs=1): err= 0: pid=3363376: Wed May 15 04:14:38 2024 00:13:50.589 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:13:50.589 slat (usec): min=2, max=35082, avg=180.10, stdev=1386.73 00:13:50.589 clat (msec): min=9, max=108, avg=23.70, stdev=20.58 00:13:50.589 lat (msec): min=9, max=108, avg=23.88, stdev=20.69 00:13:50.589 clat percentiles (msec): 00:13:50.589 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:13:50.589 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 18], 00:13:50.589 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 39], 95.00th=[ 75], 00:13:50.589 | 99.00th=[ 109], 99.50th=[ 109], 
99.90th=[ 109], 99.95th=[ 109], 00:13:50.589 | 99.99th=[ 109] 00:13:50.589 write: IOPS=2990, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1002msec); 0 zone resets 00:13:50.589 slat (usec): min=4, max=11303, avg=172.59, stdev=787.98 00:13:50.589 clat (usec): min=323, max=52498, avg=21454.17, stdev=10062.84 00:13:50.589 lat (usec): min=2905, max=52525, avg=21626.77, stdev=10111.24 00:13:50.589 clat percentiles (usec): 00:13:50.589 | 1.00th=[ 3687], 5.00th=[11076], 10.00th=[12911], 20.00th=[13435], 00:13:50.589 | 30.00th=[15139], 40.00th=[16188], 50.00th=[17171], 60.00th=[20055], 00:13:50.589 | 70.00th=[23725], 80.00th=[29492], 90.00th=[38011], 95.00th=[41681], 00:13:50.589 | 99.00th=[48497], 99.50th=[49546], 99.90th=[52167], 99.95th=[52691], 00:13:50.589 | 99.99th=[52691] 00:13:50.589 bw ( KiB/s): min= 8192, max=14752, per=18.32%, avg=11472.00, stdev=4638.62, samples=2 00:13:50.589 iops : min= 2048, max= 3688, avg=2868.00, stdev=1159.66, samples=2 00:13:50.589 lat (usec) : 500=0.02% 00:13:50.589 lat (msec) : 4=0.59%, 10=1.33%, 20=67.73%, 50=26.17%, 100=3.04% 00:13:50.589 lat (msec) : 250=1.12% 00:13:50.589 cpu : usr=3.70%, sys=4.00%, ctx=297, majf=0, minf=1 00:13:50.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:50.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.589 issued rwts: total=2560,2996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.589 job2: (groupid=0, jobs=1): err= 0: pid=3363379: Wed May 15 04:14:38 2024 00:13:50.589 read: IOPS=3496, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1009msec) 00:13:50.589 slat (usec): min=3, max=19005, avg=132.37, stdev=967.52 00:13:50.589 clat (usec): min=6649, max=54285, avg=17082.98, stdev=6168.31 00:13:50.589 lat (usec): min=6681, max=54292, avg=17215.36, stdev=6226.62 00:13:50.589 clat percentiles (usec): 00:13:50.589 | 1.00th=[ 8717], 5.00th=[10814], 10.00th=[11207], 20.00th=[12911], 00:13:50.589 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15270], 60.00th=[16188], 00:13:50.589 | 70.00th=[17957], 80.00th=[20579], 90.00th=[24773], 95.00th=[27657], 00:13:50.589 | 99.00th=[45351], 99.50th=[49546], 99.90th=[54264], 99.95th=[54264], 00:13:50.589 | 99.99th=[54264] 00:13:50.589 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:13:50.589 slat (usec): min=5, max=13912, avg=136.09, stdev=757.01 00:13:50.589 clat (usec): min=3944, max=54277, avg=18872.70, stdev=8228.38 00:13:50.589 lat (usec): min=3953, max=54289, avg=19008.79, stdev=8269.80 00:13:50.589 clat percentiles (usec): 00:13:50.589 | 1.00th=[ 5997], 5.00th=[ 8455], 10.00th=[10552], 20.00th=[12387], 00:13:50.589 | 30.00th=[14746], 40.00th=[16057], 50.00th=[16581], 60.00th=[17433], 00:13:50.589 | 70.00th=[19792], 80.00th=[24511], 90.00th=[34866], 95.00th=[37487], 00:13:50.589 | 99.00th=[39584], 99.50th=[40633], 99.90th=[41157], 99.95th=[54264], 00:13:50.589 | 99.99th=[54264] 00:13:50.589 bw ( KiB/s): min=12304, max=16368, per=22.89%, avg=14336.00, stdev=2873.68, samples=2 00:13:50.589 iops : min= 3076, max= 4092, avg=3584.00, stdev=718.42, samples=2 00:13:50.589 lat (msec) : 4=0.04%, 10=4.88%, 20=69.76%, 50=25.11%, 100=0.21% 00:13:50.589 cpu : usr=3.57%, sys=6.94%, ctx=368, majf=0, minf=1 00:13:50.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:50.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.589 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.590 issued rwts: total=3528,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.590 job3: (groupid=0, jobs=1): err= 0: pid=3363380: Wed May 15 04:14:38 2024 00:13:50.590 read: IOPS=4318, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec) 00:13:50.590 slat (usec): min=2, max=25842, avg=112.35, stdev=898.71 00:13:50.590 clat (usec): min=2797, max=61265, avg=15398.16, stdev=5522.64 00:13:50.590 lat (usec): min=5635, max=65775, avg=15510.51, stdev=5574.15 00:13:50.590 clat percentiles (usec): 00:13:50.590 | 1.00th=[ 6521], 5.00th=[10421], 10.00th=[10683], 20.00th=[11863], 00:13:50.590 | 30.00th=[12911], 40.00th=[13829], 50.00th=[14484], 60.00th=[14877], 00:13:50.590 | 70.00th=[15533], 80.00th=[17171], 90.00th=[21627], 95.00th=[25560], 00:13:50.590 | 99.00th=[33817], 99.50th=[33817], 99.90th=[61080], 99.95th=[61080], 00:13:50.590 | 99.99th=[61080] 00:13:50.590 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:13:50.590 slat (usec): min=3, max=13802, avg=94.53, stdev=654.06 00:13:50.590 clat (usec): min=1032, max=37351, avg=13178.08, stdev=5167.87 00:13:50.590 lat (usec): min=1038, max=37363, avg=13272.60, stdev=5192.67 00:13:50.590 clat percentiles (usec): 00:13:50.590 | 1.00th=[ 3851], 5.00th=[ 5997], 10.00th=[ 8029], 20.00th=[ 9634], 00:13:50.590 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12256], 60.00th=[13173], 00:13:50.590 | 70.00th=[14484], 80.00th=[16909], 90.00th=[19530], 95.00th=[21103], 00:13:50.590 | 99.00th=[33162], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:13:50.590 | 99.99th=[37487] 00:13:50.590 bw ( KiB/s): min=18376, max=18488, per=29.43%, avg=18432.00, stdev=79.20, samples=2 00:13:50.590 iops : min= 4594, max= 4622, avg=4608.00, stdev=19.80, samples=2 00:13:50.590 lat (msec) : 2=0.20%, 4=0.38%, 10=13.55%, 20=75.75%, 50=9.93% 00:13:50.590 lat (msec) : 100=0.19% 00:13:50.590 cpu : usr=3.08%, sys=5.57%, ctx=305, majf=0, minf=1 00:13:50.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:50.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.590 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.590 00:13:50.590 Run status group 0 (all jobs): 00:13:50.590 READ: bw=57.0MiB/s (59.8MB/s), 9.98MiB/s-16.9MiB/s (10.5MB/s-17.7MB/s), io=57.6MiB (60.4MB), run=1002-1009msec 00:13:50.590 WRITE: bw=61.2MiB/s (64.1MB/s), 11.7MiB/s-18.0MiB/s (12.2MB/s-18.8MB/s), io=61.7MiB (64.7MB), run=1002-1009msec 00:13:50.590 00:13:50.590 Disk stats (read/write): 00:13:50.590 nvme0n1: ios=4117/4098, merge=0/0, ticks=21544/23876, in_queue=45420, util=96.99% 00:13:50.590 nvme0n2: ios=2066/2260, merge=0/0, ticks=14177/14660, in_queue=28837, util=97.97% 00:13:50.590 nvme0n3: ios=2855/3072, merge=0/0, ticks=48918/56682, in_queue=105600, util=97.29% 00:13:50.590 nvme0n4: ios=3606/4040, merge=0/0, ticks=44716/37341, in_queue=82057, util=97.90% 00:13:50.590 04:14:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:50.590 04:14:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3363516 00:13:50.590 04:14:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:50.590 
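The records that follow are the hotplug half of the test: a 10-second fio read job at queue depth 1 is left running against the four namespaces while the backing bdevs are deleted underneath it, so the Remote I/O errors below are the expected outcome. A rough bash sketch of that sequence, assuming the same SPDK checkout path and using only the bdev names visible in this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the read job in the background and give it a moment to ramp up
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# pull the bdevs out from under the running I/O
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
done
# fio is expected to exit non-zero once its files disappear
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'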
04:14:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:50.590 [global] 00:13:50.590 thread=1 00:13:50.590 invalidate=1 00:13:50.590 rw=read 00:13:50.590 time_based=1 00:13:50.590 runtime=10 00:13:50.590 ioengine=libaio 00:13:50.590 direct=1 00:13:50.590 bs=4096 00:13:50.590 iodepth=1 00:13:50.590 norandommap=1 00:13:50.590 numjobs=1 00:13:50.590 00:13:50.590 [job0] 00:13:50.590 filename=/dev/nvme0n1 00:13:50.590 [job1] 00:13:50.590 filename=/dev/nvme0n2 00:13:50.590 [job2] 00:13:50.590 filename=/dev/nvme0n3 00:13:50.590 [job3] 00:13:50.590 filename=/dev/nvme0n4 00:13:50.590 Could not set queue depth (nvme0n1) 00:13:50.590 Could not set queue depth (nvme0n2) 00:13:50.590 Could not set queue depth (nvme0n3) 00:13:50.590 Could not set queue depth (nvme0n4) 00:13:50.590 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.590 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.590 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.590 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.590 fio-3.35 00:13:50.590 Starting 4 threads 00:13:53.931 04:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:53.931 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=294912, buflen=4096 00:13:53.931 fio: pid=3363614, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.931 04:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:53.931 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7229440, buflen=4096 00:13:53.931 fio: pid=3363613, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.931 04:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.931 04:14:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:54.189 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=348160, buflen=4096 00:13:54.189 fio: pid=3363611, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:54.189 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.189 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:54.447 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=3375104, buflen=4096 00:13:54.447 fio: pid=3363612, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:54.447 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.447 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:54.447 00:13:54.447 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3363611: Wed May 15 04:14:42 2024 00:13:54.447 read: IOPS=25, BW=99.2KiB/s 
(102kB/s)(340KiB/3427msec) 00:13:54.447 slat (usec): min=13, max=7831, avg=203.98, stdev=1181.60 00:13:54.447 clat (usec): min=575, max=43960, avg=40096.87, stdev=6181.69 00:13:54.447 lat (usec): min=591, max=48981, avg=40303.09, stdev=6323.33 00:13:54.447 clat percentiles (usec): 00:13:54.447 | 1.00th=[ 578], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:54.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:54.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:54.447 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:54.447 | 99.99th=[43779] 00:13:54.447 bw ( KiB/s): min= 96, max= 104, per=3.36%, avg=100.00, stdev= 4.38, samples=6 00:13:54.447 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:13:54.447 lat (usec) : 750=2.33% 00:13:54.447 lat (msec) : 50=96.51% 00:13:54.447 cpu : usr=0.00%, sys=0.12%, ctx=89, majf=0, minf=1 00:13:54.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.447 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3363612: Wed May 15 04:14:42 2024 00:13:54.447 read: IOPS=223, BW=893KiB/s (915kB/s)(3296KiB/3690msec) 00:13:54.447 slat (usec): min=4, max=11820, avg=48.09, stdev=594.84 00:13:54.447 clat (usec): min=334, max=42988, avg=4426.54, stdev=12170.49 00:13:54.447 lat (usec): min=339, max=52731, avg=4474.66, stdev=12248.51 00:13:54.447 clat percentiles (usec): 00:13:54.447 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:13:54.447 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 388], 00:13:54.447 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 709], 95.00th=[41157], 00:13:54.447 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:13:54.447 | 99.99th=[42730] 00:13:54.447 bw ( KiB/s): min= 96, max= 4286, per=25.36%, avg=755.14, stdev=1564.26, samples=7 00:13:54.447 iops : min= 24, max= 1071, avg=188.71, stdev=390.88, samples=7 00:13:54.447 lat (usec) : 500=86.55%, 750=3.39% 00:13:54.447 lat (msec) : 50=9.94% 00:13:54.447 cpu : usr=0.14%, sys=0.27%, ctx=831, majf=0, minf=1 00:13:54.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 issued rwts: total=825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.447 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3363613: Wed May 15 04:14:42 2024 00:13:54.447 read: IOPS=559, BW=2235KiB/s (2289kB/s)(7060KiB/3159msec) 00:13:54.447 slat (nsec): min=5521, max=38537, avg=8790.04, stdev=4467.04 00:13:54.447 clat (usec): min=361, max=41494, avg=1778.40, stdev=7354.74 00:13:54.447 lat (usec): min=367, max=41506, avg=1787.19, stdev=7357.69 00:13:54.447 clat percentiles (usec): 00:13:54.447 | 1.00th=[ 367], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 379], 00:13:54.447 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[ 396], 00:13:54.447 | 70.00th=[ 400], 80.00th=[ 
408], 90.00th=[ 449], 95.00th=[ 510], 00:13:54.447 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:13:54.447 | 99.99th=[41681] 00:13:54.447 bw ( KiB/s): min= 96, max= 5696, per=78.88%, avg=2348.00, stdev=2158.84, samples=6 00:13:54.447 iops : min= 24, max= 1424, avg=587.00, stdev=539.71, samples=6 00:13:54.447 lat (usec) : 500=94.62%, 750=1.87% 00:13:54.447 lat (msec) : 4=0.06%, 50=3.40% 00:13:54.447 cpu : usr=0.32%, sys=0.73%, ctx=1766, majf=0, minf=1 00:13:54.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.447 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3363614: Wed May 15 04:14:42 2024 00:13:54.447 read: IOPS=25, BW=98.8KiB/s (101kB/s)(288KiB/2915msec) 00:13:54.447 slat (nsec): min=13174, max=41987, avg=24883.86, stdev=9803.49 00:13:54.447 clat (usec): min=685, max=42968, avg=40447.71, stdev=4758.83 00:13:54.447 lat (usec): min=715, max=42988, avg=40472.45, stdev=4758.10 00:13:54.447 clat percentiles (usec): 00:13:54.447 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:54.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:54.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:54.447 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:54.447 | 99.99th=[42730] 00:13:54.447 bw ( KiB/s): min= 96, max= 104, per=3.26%, avg=97.60, stdev= 3.58, samples=5 00:13:54.447 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:54.447 lat (usec) : 750=1.37% 00:13:54.447 lat (msec) : 50=97.26% 00:13:54.447 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:13:54.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.447 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.447 00:13:54.447 Run status group 0 (all jobs): 00:13:54.447 READ: bw=2977KiB/s (3048kB/s), 98.8KiB/s-2235KiB/s (101kB/s-2289kB/s), io=10.7MiB (11.2MB), run=2915-3690msec 00:13:54.447 00:13:54.447 Disk stats (read/write): 00:13:54.447 nvme0n1: ios=125/0, merge=0/0, ticks=4428/0, in_queue=4428, util=99.03% 00:13:54.447 nvme0n2: ios=789/0, merge=0/0, ticks=3725/0, in_queue=3725, util=99.52% 00:13:54.447 nvme0n3: ios=1763/0, merge=0/0, ticks=3041/0, in_queue=3041, util=96.79% 00:13:54.447 nvme0n4: ios=70/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.75% 00:13:54.705 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.705 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:54.963 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.963 04:14:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:55.220 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.220 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:55.493 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.493 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3363516 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:55.757 nvmf hotplug test: fio failed as expected 00:13:55.757 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.014 04:14:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.014 rmmod nvme_tcp 00:13:56.014 rmmod nvme_fabrics 00:13:56.014 rmmod nvme_keyring 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3361493 ']' 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3361493 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3361493 ']' 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3361493 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.014 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3361493 00:13:56.272 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:56.272 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:56.272 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3361493' 00:13:56.272 killing process with pid 3361493 00:13:56.272 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3361493 00:13:56.272 [2024-05-15 04:14:44.050147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:56.272 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3361493 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.531 04:14:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.432 04:14:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.432 00:13:58.432 real 0m23.772s 00:13:58.432 user 1m20.182s 00:13:58.432 sys 0m6.697s 00:13:58.432 04:14:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.432 04:14:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.432 ************************************ 00:13:58.432 END TEST nvmf_fio_target 00:13:58.432 ************************************ 00:13:58.432 04:14:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:58.432 04:14:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:58.432 04:14:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.432 04:14:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.432 ************************************ 
00:13:58.432 START TEST nvmf_bdevio 00:13:58.432 ************************************ 00:13:58.432 04:14:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:58.691 * Looking for test storage... 00:13:58.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.691 04:14:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:01.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:01.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:01.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:01.223 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.223 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:01.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:14:01.481 00:14:01.481 --- 10.0.0.2 ping statistics --- 00:14:01.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.481 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:14:01.481 00:14:01.481 --- 10.0.0.1 ping statistics --- 00:14:01.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.481 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.481 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3366639 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3366639 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3366639 ']' 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.482 04:14:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.482 [2024-05-15 04:14:49.323995] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:14:01.482 [2024-05-15 04:14:49.324070] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.482 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.482 [2024-05-15 04:14:49.406154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.740 [2024-05-15 04:14:49.529466] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.740 [2024-05-15 04:14:49.529525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.740 [2024-05-15 04:14:49.529541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.740 [2024-05-15 04:14:49.529555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.740 [2024-05-15 04:14:49.529567] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.740 [2024-05-15 04:14:49.529660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:01.740 [2024-05-15 04:14:49.529716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:01.740 [2024-05-15 04:14:49.529773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:01.740 [2024-05-15 04:14:49.529776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 [2024-05-15 04:14:50.297674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.304 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.562 Malloc0 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:02.562 [2024-05-15 04:14:50.349952] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:02.562 [2024-05-15 04:14:50.350274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.562 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.562 { 00:14:02.562 "params": { 00:14:02.562 "name": "Nvme$subsystem", 00:14:02.562 "trtype": "$TEST_TRANSPORT", 00:14:02.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.562 "adrfam": "ipv4", 00:14:02.562 "trsvcid": "$NVMF_PORT", 00:14:02.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.562 "hdgst": ${hdgst:-false}, 00:14:02.562 "ddgst": ${ddgst:-false} 00:14:02.562 }, 00:14:02.562 "method": "bdev_nvme_attach_controller" 00:14:02.562 } 00:14:02.562 EOF 00:14:02.562 )") 00:14:02.563 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:02.563 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:02.563 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:02.563 04:14:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.563 "params": { 00:14:02.563 "name": "Nvme1", 00:14:02.563 "trtype": "tcp", 00:14:02.563 "traddr": "10.0.0.2", 00:14:02.563 "adrfam": "ipv4", 00:14:02.563 "trsvcid": "4420", 00:14:02.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.563 "hdgst": false, 00:14:02.563 "ddgst": false 00:14:02.563 }, 00:14:02.563 "method": "bdev_nvme_attach_controller" 00:14:02.563 }' 00:14:02.563 [2024-05-15 04:14:50.396265] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
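Taken together, the target-side setup traced above (target/bdevio.sh@18-22) comes down to five RPCs: create the TCP transport, create a 64 MiB Malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420 (confirmed by the nvmf_tcp_listen notice above). A rough sketch of the same sequence with scripts/rpc.py - rpc_cmd in the trace is a thin wrapper around it - assuming the nvmf_tgt launched above is reachable on its default /var/tmp/spdk.sock:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at that listener through the bdev_nvme_attach_controller JSON printed above, fed in on --json /dev/fd/62.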
00:14:02.563 [2024-05-15 04:14:50.396336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366796 ] 00:14:02.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.563 [2024-05-15 04:14:50.467322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.821 [2024-05-15 04:14:50.584175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.821 [2024-05-15 04:14:50.584224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.821 [2024-05-15 04:14:50.584228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.078 I/O targets: 00:14:03.078 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:03.078 00:14:03.078 00:14:03.078 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.078 http://cunit.sourceforge.net/ 00:14:03.078 00:14:03.078 00:14:03.078 Suite: bdevio tests on: Nvme1n1 00:14:03.078 Test: blockdev write read block ...passed 00:14:03.078 Test: blockdev write zeroes read block ...passed 00:14:03.078 Test: blockdev write zeroes read no split ...passed 00:14:03.078 Test: blockdev write zeroes read split ...passed 00:14:03.335 Test: blockdev write zeroes read split partial ...passed 00:14:03.335 Test: blockdev reset ...[2024-05-15 04:14:51.136889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:03.335 [2024-05-15 04:14:51.137001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d79f0 (9): Bad file descriptor 00:14:03.335 [2024-05-15 04:14:51.273115] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:03.335 passed 00:14:03.335 Test: blockdev write read 8 blocks ...passed 00:14:03.335 Test: blockdev write read size > 128k ...passed 00:14:03.335 Test: blockdev write read invalid size ...passed 00:14:03.335 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:03.335 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:03.335 Test: blockdev write read max offset ...passed 00:14:03.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.593 Test: blockdev writev readv 8 blocks ...passed 00:14:03.593 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.593 Test: blockdev writev readv block ...passed 00:14:03.593 Test: blockdev writev readv size > 128k ...passed 00:14:03.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.593 Test: blockdev comparev and writev ...[2024-05-15 04:14:51.492233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.492277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.492302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.492320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.492730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.492755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.492778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.492793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.493230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.493254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.493276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.493291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.493707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.493731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.493753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.593 [2024-05-15 04:14:51.493768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:03.593 passed 00:14:03.593 Test: blockdev nvme passthru rw ...passed 00:14:03.593 Test: blockdev nvme passthru vendor specific ...[2024-05-15 04:14:51.577371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.593 [2024-05-15 04:14:51.577398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.577626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.593 [2024-05-15 04:14:51.577649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.577874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.593 [2024-05-15 04:14:51.577897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:03.593 [2024-05-15 04:14:51.578133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.593 [2024-05-15 04:14:51.578157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:03.593 passed 00:14:03.593 Test: blockdev nvme admin passthru ...passed 00:14:03.851 Test: blockdev copy ...passed 00:14:03.851 00:14:03.851 Run Summary: Type Total Ran Passed Failed Inactive 00:14:03.851 suites 1 1 n/a 0 0 00:14:03.851 tests 23 23 23 0 0 00:14:03.851 asserts 152 152 152 0 n/a 00:14:03.851 00:14:03.851 Elapsed time = 1.433 seconds 00:14:04.108 04:14:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.109 rmmod nvme_tcp 00:14:04.109 rmmod nvme_fabrics 00:14:04.109 rmmod nvme_keyring 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3366639 ']' 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3366639 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3366639 ']' 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3366639 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3366639 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3366639' 00:14:04.109 killing process with pid 3366639 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3366639 00:14:04.109 [2024-05-15 04:14:51.967522] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:04.109 04:14:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3366639 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.368 04:14:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.902 04:14:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.902 00:14:06.902 real 0m7.891s 00:14:06.902 user 0m14.787s 00:14:06.902 sys 0m2.568s 00:14:06.902 04:14:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:06.902 04:14:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.902 ************************************ 00:14:06.902 END TEST nvmf_bdevio 00:14:06.902 ************************************ 00:14:06.902 04:14:54 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.902 04:14:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:06.902 04:14:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.902 04:14:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.902 ************************************ 00:14:06.902 START TEST nvmf_auth_target 00:14:06.902 ************************************ 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.902 * Looking for test storage... 
00:14:06.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.902 04:14:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.903 04:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:09.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:09.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:09.434 Found net devices under 
0000:0a:00.0: cvl_0_0 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:09.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:14:09.434 00:14:09.434 --- 10.0.0.2 ping statistics --- 00:14:09.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.434 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:09.434 00:14:09.434 --- 10.0.0.1 ping statistics --- 00:14:09.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.434 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.434 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3369169 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3369169 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3369169 ']' 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
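As in the bdevio run, nvmf_tcp_init splits the two ice-driven ports across a network namespace so the target side (cvl_0_0, 10.0.0.2) and the initiator side (cvl_0_1, 10.0.0.1) talk over a real link. A minimal sketch of the commands traced above:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The nvmf_tgt launched above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -L nvmf_auth), which is why target-side commands in this test carry the namespace prefix.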
00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.435 04:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=3369305 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9b558f56e2d1a269ac18c2f884e75b4e7d674a8fdeba0f8b 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.l7s 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9b558f56e2d1a269ac18c2f884e75b4e7d674a8fdeba0f8b 0 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9b558f56e2d1a269ac18c2f884e75b4e7d674a8fdeba0f8b 0 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9b558f56e2d1a269ac18c2f884e75b4e7d674a8fdeba0f8b 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.l7s 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.l7s 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.l7s 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b2298d71c188260a1b54f27ef4f5a39b 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tQ4 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b2298d71c188260a1b54f27ef4f5a39b 1 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b2298d71c188260a1b54f27ef4f5a39b 1 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b2298d71c188260a1b54f27ef4f5a39b 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:09.435 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tQ4 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tQ4 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.tQ4 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fcabd48edb5208f9a74ad92df13ab78ecfdcfa189c5bb2a1 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vTz 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fcabd48edb5208f9a74ad92df13ab78ecfdcfa189c5bb2a1 2 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fcabd48edb5208f9a74ad92df13ab78ecfdcfa189c5bb2a1 2 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fcabd48edb5208f9a74ad92df13ab78ecfdcfa189c5bb2a1 00:14:09.693 
04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vTz 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vTz 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.vTz 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f75ba6135295dab5c6eafa859c40dab7b9cd0cda0460f51f583d72b7aa053c3 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gvn 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f75ba6135295dab5c6eafa859c40dab7b9cd0cda0460f51f583d72b7aa053c3 3 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f75ba6135295dab5c6eafa859c40dab7b9cd0cda0460f51f583d72b7aa053c3 3 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f75ba6135295dab5c6eafa859c40dab7b9cd0cda0460f51f583d72b7aa053c3 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gvn 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gvn 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.gvn 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 3369169 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3369169 ']' 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
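gen_dhchap_key produces the four DH-HMAC-CHAP secrets used in this test: 48 hex characters for the null-digest key, 32 for sha256, 48 for sha384 and 64 for sha512, each written to a mode-0600 temp file (/tmp/spdk.key-null.l7s, /tmp/spdk.key-sha256.tQ4, /tmp/spdk.key-sha384.vTz, /tmp/spdk.key-sha512.gvn). A rough sketch of one iteration, covering only the random-material and file-handling steps; the DHHC-1:<digest>:<base64 payload>: wrapping is done by the small inline python helper in nvmf/common.sh and is not reproduced here:

  len=48                                  # hex chars; raw key material is len/2 bytes
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  echo "$key" > "$file"                   # the real helper writes the DHHC-1-formatted string instead
  chmod 0600 "$file"

The resulting key files are then registered on both sides below: on the host via hostrpc keyring_file_add_key (/var/tmp/host.sock) and on the target via rpc_cmd keyring_file_add_key.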
00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.693 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 3369305 /var/tmp/host.sock 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3369305 ']' 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:09.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.979 04:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.l7s 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.l7s 00:14:10.236 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.l7s 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tQ4 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tQ4 00:14:10.494 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.tQ4 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vTz 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vTz 00:14:10.753 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vTz 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gvn 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gvn 00:14:11.010 04:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gvn 00:14:11.267 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:14:11.267 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.267 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:11.267 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.267 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:11.524 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:11.781 00:14:11.781 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:11.781 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:11.781 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:12.038 { 00:14:12.038 "cntlid": 1, 00:14:12.038 "qid": 0, 00:14:12.038 "state": "enabled", 00:14:12.038 "listen_address": { 00:14:12.038 "trtype": "TCP", 00:14:12.038 "adrfam": "IPv4", 00:14:12.038 "traddr": "10.0.0.2", 00:14:12.038 "trsvcid": "4420" 00:14:12.038 }, 00:14:12.038 "peer_address": { 00:14:12.038 "trtype": "TCP", 00:14:12.038 "adrfam": "IPv4", 00:14:12.038 "traddr": "10.0.0.1", 00:14:12.038 "trsvcid": "36742" 00:14:12.038 }, 00:14:12.038 "auth": { 00:14:12.038 "state": "completed", 00:14:12.038 "digest": "sha256", 00:14:12.038 "dhgroup": "null" 00:14:12.038 } 00:14:12.038 } 00:14:12.038 ]' 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.038 04:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:12.038 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:12.038 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:12.038 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.038 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.038 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.604 04:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:13.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:13.536 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:14.100 00:14:14.100 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:14.100 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:14.100 04:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.100 04:15:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:14.100 { 00:14:14.100 "cntlid": 3, 00:14:14.100 "qid": 0, 00:14:14.100 "state": "enabled", 00:14:14.100 "listen_address": { 00:14:14.100 "trtype": "TCP", 00:14:14.100 "adrfam": "IPv4", 00:14:14.100 "traddr": "10.0.0.2", 00:14:14.100 "trsvcid": "4420" 00:14:14.100 }, 00:14:14.100 "peer_address": { 00:14:14.100 "trtype": "TCP", 00:14:14.100 "adrfam": "IPv4", 00:14:14.100 "traddr": "10.0.0.1", 00:14:14.100 "trsvcid": "47134" 00:14:14.100 }, 00:14:14.100 "auth": { 00:14:14.100 "state": "completed", 00:14:14.100 "digest": "sha256", 00:14:14.100 "dhgroup": "null" 00:14:14.100 } 00:14:14.100 } 00:14:14.100 ]' 00:14:14.100 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.356 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.613 04:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.544 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:15.800 04:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:16.057 00:14:16.057 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:16.057 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:16.057 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:16.315 { 00:14:16.315 "cntlid": 5, 00:14:16.315 "qid": 0, 00:14:16.315 "state": "enabled", 00:14:16.315 "listen_address": { 00:14:16.315 "trtype": "TCP", 00:14:16.315 "adrfam": "IPv4", 00:14:16.315 "traddr": "10.0.0.2", 00:14:16.315 "trsvcid": "4420" 00:14:16.315 }, 00:14:16.315 "peer_address": { 00:14:16.315 "trtype": "TCP", 00:14:16.315 "adrfam": "IPv4", 00:14:16.315 "traddr": "10.0.0.1", 00:14:16.315 "trsvcid": "47170" 00:14:16.315 }, 00:14:16.315 "auth": { 00:14:16.315 "state": "completed", 00:14:16.315 "digest": "sha256", 00:14:16.315 "dhgroup": "null" 00:14:16.315 } 00:14:16.315 } 00:14:16.315 ]' 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.315 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:16.573 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:16.573 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:16.573 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.573 04:15:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.573 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.830 04:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.762 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.020 04:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.278 00:14:18.278 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:18.278 04:15:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.278 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:18.536 { 00:14:18.536 "cntlid": 7, 00:14:18.536 "qid": 0, 00:14:18.536 "state": "enabled", 00:14:18.536 "listen_address": { 00:14:18.536 "trtype": "TCP", 00:14:18.536 "adrfam": "IPv4", 00:14:18.536 "traddr": "10.0.0.2", 00:14:18.536 "trsvcid": "4420" 00:14:18.536 }, 00:14:18.536 "peer_address": { 00:14:18.536 "trtype": "TCP", 00:14:18.536 "adrfam": "IPv4", 00:14:18.536 "traddr": "10.0.0.1", 00:14:18.536 "trsvcid": "47194" 00:14:18.536 }, 00:14:18.536 "auth": { 00:14:18.536 "state": "completed", 00:14:18.536 "digest": "sha256", 00:14:18.536 "dhgroup": "null" 00:14:18.536 } 00:14:18.536 } 00:14:18.536 ]' 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:18.536 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:18.800 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.800 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.800 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.058 04:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.991 04:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.991 04:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.991 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:19.991 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:20.557 00:14:20.557 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:20.557 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.557 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:20.816 { 00:14:20.816 "cntlid": 9, 00:14:20.816 "qid": 0, 00:14:20.816 "state": "enabled", 00:14:20.816 "listen_address": { 00:14:20.816 "trtype": "TCP", 00:14:20.816 "adrfam": "IPv4", 00:14:20.816 "traddr": "10.0.0.2", 00:14:20.816 "trsvcid": "4420" 00:14:20.816 }, 00:14:20.816 "peer_address": { 00:14:20.816 "trtype": "TCP", 00:14:20.816 "adrfam": "IPv4", 00:14:20.816 "traddr": "10.0.0.1", 
00:14:20.816 "trsvcid": "47216" 00:14:20.816 }, 00:14:20.816 "auth": { 00:14:20.816 "state": "completed", 00:14:20.816 "digest": "sha256", 00:14:20.816 "dhgroup": "ffdhe2048" 00:14:20.816 } 00:14:20.816 } 00:14:20.816 ]' 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.816 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.074 04:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.006 04:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:22.263 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:22.521 00:14:22.779 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:22.779 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:22.779 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:23.037 { 00:14:23.037 "cntlid": 11, 00:14:23.037 "qid": 0, 00:14:23.037 "state": "enabled", 00:14:23.037 "listen_address": { 00:14:23.037 "trtype": "TCP", 00:14:23.037 "adrfam": "IPv4", 00:14:23.037 "traddr": "10.0.0.2", 00:14:23.037 "trsvcid": "4420" 00:14:23.037 }, 00:14:23.037 "peer_address": { 00:14:23.037 "trtype": "TCP", 00:14:23.037 "adrfam": "IPv4", 00:14:23.037 "traddr": "10.0.0.1", 00:14:23.037 "trsvcid": "53220" 00:14:23.037 }, 00:14:23.037 "auth": { 00:14:23.037 "state": "completed", 00:14:23.037 "digest": "sha256", 00:14:23.037 "dhgroup": "ffdhe2048" 00:14:23.037 } 00:14:23.037 } 00:14:23.037 ]' 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.037 04:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.321 04:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.253 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:24.512 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:24.768 00:14:24.768 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:24.768 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.768 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:25.025 { 00:14:25.025 "cntlid": 13, 00:14:25.025 "qid": 0, 00:14:25.025 "state": "enabled", 00:14:25.025 "listen_address": { 00:14:25.025 "trtype": "TCP", 00:14:25.025 "adrfam": "IPv4", 00:14:25.025 "traddr": "10.0.0.2", 00:14:25.025 "trsvcid": "4420" 00:14:25.025 }, 00:14:25.025 "peer_address": { 00:14:25.025 "trtype": "TCP", 00:14:25.025 "adrfam": "IPv4", 00:14:25.025 "traddr": "10.0.0.1", 00:14:25.025 "trsvcid": "53252" 00:14:25.025 }, 00:14:25.025 "auth": { 00:14:25.025 "state": "completed", 00:14:25.025 "digest": "sha256", 00:14:25.025 "dhgroup": "ffdhe2048" 00:14:25.025 } 00:14:25.025 } 00:14:25.025 ]' 00:14:25.025 04:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:25.025 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.025 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:25.282 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.282 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:25.282 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.282 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.282 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.540 04:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.474 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.732 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.990 00:14:26.990 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:26.990 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:26.990 04:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:27.249 { 00:14:27.249 "cntlid": 15, 00:14:27.249 "qid": 0, 00:14:27.249 "state": "enabled", 00:14:27.249 "listen_address": { 00:14:27.249 "trtype": "TCP", 00:14:27.249 "adrfam": "IPv4", 00:14:27.249 "traddr": "10.0.0.2", 00:14:27.249 "trsvcid": "4420" 00:14:27.249 }, 00:14:27.249 "peer_address": { 00:14:27.249 "trtype": "TCP", 00:14:27.249 "adrfam": "IPv4", 00:14:27.249 "traddr": "10.0.0.1", 00:14:27.249 "trsvcid": "53274" 00:14:27.249 }, 00:14:27.249 "auth": { 00:14:27.249 "state": "completed", 00:14:27.249 "digest": "sha256", 00:14:27.249 "dhgroup": "ffdhe2048" 00:14:27.249 } 00:14:27.249 } 00:14:27.249 ]' 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.249 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:27.507 04:15:15 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:27.507 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:27.507 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.507 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.507 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.765 04:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.697 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.955 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:14:28.955 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:28.955 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.956 04:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:28.956 04:15:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:29.214 00:14:29.214 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:29.214 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.214 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:29.472 { 00:14:29.472 "cntlid": 17, 00:14:29.472 "qid": 0, 00:14:29.472 "state": "enabled", 00:14:29.472 "listen_address": { 00:14:29.472 "trtype": "TCP", 00:14:29.472 "adrfam": "IPv4", 00:14:29.472 "traddr": "10.0.0.2", 00:14:29.472 "trsvcid": "4420" 00:14:29.472 }, 00:14:29.472 "peer_address": { 00:14:29.472 "trtype": "TCP", 00:14:29.472 "adrfam": "IPv4", 00:14:29.472 "traddr": "10.0.0.1", 00:14:29.472 "trsvcid": "53288" 00:14:29.472 }, 00:14:29.472 "auth": { 00:14:29.472 "state": "completed", 00:14:29.472 "digest": "sha256", 00:14:29.472 "dhgroup": "ffdhe3072" 00:14:29.472 } 00:14:29.472 } 00:14:29.472 ]' 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.472 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:29.730 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.730 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.730 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.730 04:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.102 04:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.102 04:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.102 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:31.102 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:31.360 00:14:31.617 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:31.617 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:31.617 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:31.875 { 
00:14:31.875 "cntlid": 19, 00:14:31.875 "qid": 0, 00:14:31.875 "state": "enabled", 00:14:31.875 "listen_address": { 00:14:31.875 "trtype": "TCP", 00:14:31.875 "adrfam": "IPv4", 00:14:31.875 "traddr": "10.0.0.2", 00:14:31.875 "trsvcid": "4420" 00:14:31.875 }, 00:14:31.875 "peer_address": { 00:14:31.875 "trtype": "TCP", 00:14:31.875 "adrfam": "IPv4", 00:14:31.875 "traddr": "10.0.0.1", 00:14:31.875 "trsvcid": "53310" 00:14:31.875 }, 00:14:31.875 "auth": { 00:14:31.875 "state": "completed", 00:14:31.875 "digest": "sha256", 00:14:31.875 "dhgroup": "ffdhe3072" 00:14:31.875 } 00:14:31.875 } 00:14:31.875 ]' 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.875 04:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.133 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.066 04:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.324 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:33.325 
04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:33.325 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:33.583 00:14:33.583 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:33.583 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:33.583 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.841 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:33.841 { 00:14:33.841 "cntlid": 21, 00:14:33.841 "qid": 0, 00:14:33.841 "state": "enabled", 00:14:33.841 "listen_address": { 00:14:33.841 "trtype": "TCP", 00:14:33.841 "adrfam": "IPv4", 00:14:33.841 "traddr": "10.0.0.2", 00:14:33.841 "trsvcid": "4420" 00:14:33.841 }, 00:14:33.841 "peer_address": { 00:14:33.841 "trtype": "TCP", 00:14:33.841 "adrfam": "IPv4", 00:14:33.841 "traddr": "10.0.0.1", 00:14:33.841 "trsvcid": "60694" 00:14:33.841 }, 00:14:33.841 "auth": { 00:14:33.841 "state": "completed", 00:14:33.841 "digest": "sha256", 00:14:33.841 "dhgroup": "ffdhe3072" 00:14:33.841 } 00:14:33.841 } 00:14:33.841 ]' 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.099 04:15:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.356 04:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.289 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.547 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.111 00:14:36.111 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:36.111 04:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:36.111 04:15:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:36.370 { 00:14:36.370 "cntlid": 23, 00:14:36.370 "qid": 0, 00:14:36.370 "state": "enabled", 00:14:36.370 "listen_address": { 00:14:36.370 "trtype": "TCP", 00:14:36.370 "adrfam": "IPv4", 00:14:36.370 "traddr": "10.0.0.2", 00:14:36.370 "trsvcid": "4420" 00:14:36.370 }, 00:14:36.370 "peer_address": { 00:14:36.370 "trtype": "TCP", 00:14:36.370 "adrfam": "IPv4", 00:14:36.370 "traddr": "10.0.0.1", 00:14:36.370 "trsvcid": "60726" 00:14:36.370 }, 00:14:36.370 "auth": { 00:14:36.370 "state": "completed", 00:14:36.370 "digest": "sha256", 00:14:36.370 "dhgroup": "ffdhe3072" 00:14:36.370 } 00:14:36.370 } 00:14:36.370 ]' 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.370 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.627 04:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:14:37.602 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.603 04:15:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.603 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:37.860 04:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:38.427 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:38.427 { 00:14:38.427 "cntlid": 25, 00:14:38.427 "qid": 0, 00:14:38.427 "state": "enabled", 00:14:38.427 "listen_address": { 00:14:38.427 "trtype": "TCP", 00:14:38.427 "adrfam": "IPv4", 00:14:38.427 "traddr": "10.0.0.2", 00:14:38.427 "trsvcid": "4420" 00:14:38.427 }, 00:14:38.427 "peer_address": { 00:14:38.427 "trtype": "TCP", 00:14:38.427 "adrfam": "IPv4", 00:14:38.427 "traddr": "10.0.0.1", 00:14:38.427 "trsvcid": "60758" 00:14:38.427 }, 
00:14:38.427 "auth": { 00:14:38.427 "state": "completed", 00:14:38.427 "digest": "sha256", 00:14:38.427 "dhgroup": "ffdhe4096" 00:14:38.427 } 00:14:38.427 } 00:14:38.427 ]' 00:14:38.427 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.685 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.943 04:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.878 04:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:40.136 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:40.703 00:14:40.703 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:40.703 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:40.703 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:40.961 { 00:14:40.961 "cntlid": 27, 00:14:40.961 "qid": 0, 00:14:40.961 "state": "enabled", 00:14:40.961 "listen_address": { 00:14:40.961 "trtype": "TCP", 00:14:40.961 "adrfam": "IPv4", 00:14:40.961 "traddr": "10.0.0.2", 00:14:40.961 "trsvcid": "4420" 00:14:40.961 }, 00:14:40.961 "peer_address": { 00:14:40.961 "trtype": "TCP", 00:14:40.961 "adrfam": "IPv4", 00:14:40.961 "traddr": "10.0.0.1", 00:14:40.961 "trsvcid": "60788" 00:14:40.961 }, 00:14:40.961 "auth": { 00:14:40.961 "state": "completed", 00:14:40.961 "digest": "sha256", 00:14:40.961 "dhgroup": "ffdhe4096" 00:14:40.961 } 00:14:40.961 } 00:14:40.961 ]' 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:40.961 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.962 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:40.962 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.962 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.962 04:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.220 04:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.152 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:42.411 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:42.976 00:14:42.976 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:42.976 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:42.977 04:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.235 04:15:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:43.235 { 00:14:43.235 "cntlid": 29, 00:14:43.235 "qid": 0, 00:14:43.235 "state": "enabled", 00:14:43.235 "listen_address": { 00:14:43.235 "trtype": "TCP", 00:14:43.235 "adrfam": "IPv4", 00:14:43.235 "traddr": "10.0.0.2", 00:14:43.235 "trsvcid": "4420" 00:14:43.235 }, 00:14:43.235 "peer_address": { 00:14:43.235 "trtype": "TCP", 00:14:43.235 "adrfam": "IPv4", 00:14:43.235 "traddr": "10.0.0.1", 00:14:43.235 "trsvcid": "45042" 00:14:43.235 }, 00:14:43.235 "auth": { 00:14:43.235 "state": "completed", 00:14:43.235 "digest": "sha256", 00:14:43.235 "dhgroup": "ffdhe4096" 00:14:43.235 } 00:14:43.235 } 00:14:43.235 ]' 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.235 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.493 04:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.428 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.687 04:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.254 00:14:45.254 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:45.254 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:45.254 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.512 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.512 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:45.513 { 00:14:45.513 "cntlid": 31, 00:14:45.513 "qid": 0, 00:14:45.513 "state": "enabled", 00:14:45.513 "listen_address": { 00:14:45.513 "trtype": "TCP", 00:14:45.513 "adrfam": "IPv4", 00:14:45.513 "traddr": "10.0.0.2", 00:14:45.513 "trsvcid": "4420" 00:14:45.513 }, 00:14:45.513 "peer_address": { 00:14:45.513 "trtype": "TCP", 00:14:45.513 "adrfam": "IPv4", 00:14:45.513 "traddr": "10.0.0.1", 00:14:45.513 "trsvcid": "45072" 00:14:45.513 }, 00:14:45.513 "auth": { 00:14:45.513 "state": "completed", 00:14:45.513 "digest": "sha256", 00:14:45.513 "dhgroup": "ffdhe4096" 00:14:45.513 } 00:14:45.513 } 00:14:45.513 ]' 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.513 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.771 04:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.704 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:46.963 04:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:47.528 00:14:47.528 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:47.528 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:47.529 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:47.786 { 00:14:47.786 "cntlid": 33, 00:14:47.786 "qid": 0, 00:14:47.786 "state": "enabled", 00:14:47.786 "listen_address": { 00:14:47.786 "trtype": "TCP", 00:14:47.786 "adrfam": "IPv4", 00:14:47.786 "traddr": "10.0.0.2", 00:14:47.786 "trsvcid": "4420" 00:14:47.786 }, 00:14:47.786 "peer_address": { 00:14:47.786 "trtype": "TCP", 00:14:47.786 "adrfam": "IPv4", 00:14:47.786 "traddr": "10.0.0.1", 00:14:47.786 "trsvcid": "45106" 00:14:47.786 }, 00:14:47.786 "auth": { 00:14:47.786 "state": "completed", 00:14:47.786 "digest": "sha256", 00:14:47.786 "dhgroup": "ffdhe6144" 00:14:47.786 } 00:14:47.786 } 00:14:47.786 ]' 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.786 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:48.044 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.044 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.044 04:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.302 04:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:49.236 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.236 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.236 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.237 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.237 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.237 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:49.237 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.237 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:49.495 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:50.062 00:14:50.062 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:50.062 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:50.062 04:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:50.337 { 00:14:50.337 "cntlid": 35, 00:14:50.337 "qid": 0, 
00:14:50.337 "state": "enabled", 00:14:50.337 "listen_address": { 00:14:50.337 "trtype": "TCP", 00:14:50.337 "adrfam": "IPv4", 00:14:50.337 "traddr": "10.0.0.2", 00:14:50.337 "trsvcid": "4420" 00:14:50.337 }, 00:14:50.337 "peer_address": { 00:14:50.337 "trtype": "TCP", 00:14:50.337 "adrfam": "IPv4", 00:14:50.337 "traddr": "10.0.0.1", 00:14:50.337 "trsvcid": "45134" 00:14:50.337 }, 00:14:50.337 "auth": { 00:14:50.337 "state": "completed", 00:14:50.337 "digest": "sha256", 00:14:50.337 "dhgroup": "ffdhe6144" 00:14:50.337 } 00:14:50.337 } 00:14:50.337 ]' 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.337 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.595 04:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.563 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.821 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:51.822 04:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:52.388 00:14:52.388 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:52.388 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:52.388 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:52.647 { 00:14:52.647 "cntlid": 37, 00:14:52.647 "qid": 0, 00:14:52.647 "state": "enabled", 00:14:52.647 "listen_address": { 00:14:52.647 "trtype": "TCP", 00:14:52.647 "adrfam": "IPv4", 00:14:52.647 "traddr": "10.0.0.2", 00:14:52.647 "trsvcid": "4420" 00:14:52.647 }, 00:14:52.647 "peer_address": { 00:14:52.647 "trtype": "TCP", 00:14:52.647 "adrfam": "IPv4", 00:14:52.647 "traddr": "10.0.0.1", 00:14:52.647 "trsvcid": "45164" 00:14:52.647 }, 00:14:52.647 "auth": { 00:14:52.647 "state": "completed", 00:14:52.647 "digest": "sha256", 00:14:52.647 "dhgroup": "ffdhe6144" 00:14:52.647 } 00:14:52.647 } 00:14:52.647 ]' 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.647 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:52.905 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.905 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:52.905 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.905 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.905 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.163 04:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.097 04:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.355 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.963 00:14:54.963 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:54.963 04:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:54.963 04:15:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:55.221 { 00:14:55.221 "cntlid": 39, 00:14:55.221 "qid": 0, 00:14:55.221 "state": "enabled", 00:14:55.221 "listen_address": { 00:14:55.221 "trtype": "TCP", 00:14:55.221 "adrfam": "IPv4", 00:14:55.221 "traddr": "10.0.0.2", 00:14:55.221 "trsvcid": "4420" 00:14:55.221 }, 00:14:55.221 "peer_address": { 00:14:55.221 "trtype": "TCP", 00:14:55.221 "adrfam": "IPv4", 00:14:55.221 "traddr": "10.0.0.1", 00:14:55.221 "trsvcid": "41040" 00:14:55.221 }, 00:14:55.221 "auth": { 00:14:55.221 "state": "completed", 00:14:55.221 "digest": "sha256", 00:14:55.221 "dhgroup": "ffdhe6144" 00:14:55.221 } 00:14:55.221 } 00:14:55.221 ]' 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.221 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.479 04:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.411 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:56.670 04:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:57.604 00:14:57.604 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:57.604 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:57.604 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:57.862 { 00:14:57.862 "cntlid": 41, 00:14:57.862 "qid": 0, 00:14:57.862 "state": "enabled", 00:14:57.862 "listen_address": { 00:14:57.862 "trtype": "TCP", 00:14:57.862 "adrfam": "IPv4", 00:14:57.862 "traddr": "10.0.0.2", 00:14:57.862 "trsvcid": "4420" 00:14:57.862 }, 00:14:57.862 "peer_address": { 00:14:57.862 "trtype": "TCP", 00:14:57.862 "adrfam": "IPv4", 00:14:57.862 "traddr": "10.0.0.1", 00:14:57.862 "trsvcid": "41056" 00:14:57.862 }, 00:14:57.862 "auth": { 00:14:57.862 "state": 
"completed", 00:14:57.862 "digest": "sha256", 00:14:57.862 "dhgroup": "ffdhe8192" 00:14:57.862 } 00:14:57.862 } 00:14:57.862 ]' 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:57.862 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.121 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:58.121 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.121 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.121 04:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.379 04:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.312 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:59.570 04:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:00.504 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:00.504 { 00:15:00.504 "cntlid": 43, 00:15:00.504 "qid": 0, 00:15:00.504 "state": "enabled", 00:15:00.504 "listen_address": { 00:15:00.504 "trtype": "TCP", 00:15:00.504 "adrfam": "IPv4", 00:15:00.504 "traddr": "10.0.0.2", 00:15:00.504 "trsvcid": "4420" 00:15:00.504 }, 00:15:00.504 "peer_address": { 00:15:00.504 "trtype": "TCP", 00:15:00.504 "adrfam": "IPv4", 00:15:00.504 "traddr": "10.0.0.1", 00:15:00.504 "trsvcid": "41080" 00:15:00.504 }, 00:15:00.504 "auth": { 00:15:00.504 "state": "completed", 00:15:00.504 "digest": "sha256", 00:15:00.504 "dhgroup": "ffdhe8192" 00:15:00.504 } 00:15:00.504 } 00:15:00.504 ]' 00:15:00.504 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.762 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.020 04:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.961 04:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:02.220 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:03.154 00:15:03.154 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:03.154 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:03.154 04:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:03.412 { 00:15:03.412 "cntlid": 45, 00:15:03.412 "qid": 0, 00:15:03.412 "state": "enabled", 00:15:03.412 "listen_address": { 00:15:03.412 "trtype": "TCP", 00:15:03.412 "adrfam": "IPv4", 00:15:03.412 "traddr": "10.0.0.2", 00:15:03.412 "trsvcid": "4420" 00:15:03.412 }, 00:15:03.412 "peer_address": { 00:15:03.412 "trtype": "TCP", 00:15:03.412 "adrfam": "IPv4", 00:15:03.412 "traddr": "10.0.0.1", 00:15:03.412 "trsvcid": "41102" 00:15:03.412 }, 00:15:03.412 "auth": { 00:15:03.412 "state": "completed", 00:15:03.412 "digest": "sha256", 00:15:03.412 "dhgroup": "ffdhe8192" 00:15:03.412 } 00:15:03.412 } 00:15:03.412 ]' 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.412 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.670 04:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.604 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.862 
04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.862 04:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.794 00:15:05.794 04:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:05.794 04:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:05.794 04:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:06.051 { 00:15:06.051 "cntlid": 47, 00:15:06.051 "qid": 0, 00:15:06.051 "state": "enabled", 00:15:06.051 "listen_address": { 00:15:06.051 "trtype": "TCP", 00:15:06.051 "adrfam": "IPv4", 00:15:06.051 "traddr": "10.0.0.2", 00:15:06.051 "trsvcid": "4420" 00:15:06.051 }, 00:15:06.051 "peer_address": { 00:15:06.051 "trtype": "TCP", 00:15:06.051 "adrfam": "IPv4", 00:15:06.051 "traddr": "10.0.0.1", 00:15:06.051 "trsvcid": "44006" 00:15:06.051 }, 00:15:06.051 "auth": { 00:15:06.051 "state": "completed", 00:15:06.051 "digest": "sha256", 00:15:06.051 "dhgroup": "ffdhe8192" 00:15:06.051 } 00:15:06.051 } 00:15:06.051 ]' 00:15:06.051 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.308 
04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.308 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.566 04:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.500 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.758 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:07.758 04:15:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:08.016 00:15:08.016 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:08.016 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.016 04:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:08.274 { 00:15:08.274 "cntlid": 49, 00:15:08.274 "qid": 0, 00:15:08.274 "state": "enabled", 00:15:08.274 "listen_address": { 00:15:08.274 "trtype": "TCP", 00:15:08.274 "adrfam": "IPv4", 00:15:08.274 "traddr": "10.0.0.2", 00:15:08.274 "trsvcid": "4420" 00:15:08.274 }, 00:15:08.274 "peer_address": { 00:15:08.274 "trtype": "TCP", 00:15:08.274 "adrfam": "IPv4", 00:15:08.274 "traddr": "10.0.0.1", 00:15:08.274 "trsvcid": "44020" 00:15:08.274 }, 00:15:08.274 "auth": { 00:15:08.274 "state": "completed", 00:15:08.274 "digest": "sha384", 00:15:08.274 "dhgroup": "null" 00:15:08.274 } 00:15:08.274 } 00:15:08.274 ]' 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:08.274 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:08.532 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.532 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.532 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.790 04:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:09.723 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.723 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.723 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.724 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.724 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.724 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:09.724 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.724 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.981 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:15:09.981 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:09.981 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.981 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.981 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:09.982 04:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:10.239 00:15:10.239 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:10.239 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:10.239 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:10.497 { 00:15:10.497 "cntlid": 51, 00:15:10.497 "qid": 
0, 00:15:10.497 "state": "enabled", 00:15:10.497 "listen_address": { 00:15:10.497 "trtype": "TCP", 00:15:10.497 "adrfam": "IPv4", 00:15:10.497 "traddr": "10.0.0.2", 00:15:10.497 "trsvcid": "4420" 00:15:10.497 }, 00:15:10.497 "peer_address": { 00:15:10.497 "trtype": "TCP", 00:15:10.497 "adrfam": "IPv4", 00:15:10.497 "traddr": "10.0.0.1", 00:15:10.497 "trsvcid": "44034" 00:15:10.497 }, 00:15:10.497 "auth": { 00:15:10.497 "state": "completed", 00:15:10.497 "digest": "sha384", 00:15:10.497 "dhgroup": "null" 00:15:10.497 } 00:15:10.497 } 00:15:10.497 ]' 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.497 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.756 04:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.689 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:11.947 04:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:12.229 00:15:12.229 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:12.229 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:12.229 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:12.487 { 00:15:12.487 "cntlid": 53, 00:15:12.487 "qid": 0, 00:15:12.487 "state": "enabled", 00:15:12.487 "listen_address": { 00:15:12.487 "trtype": "TCP", 00:15:12.487 "adrfam": "IPv4", 00:15:12.487 "traddr": "10.0.0.2", 00:15:12.487 "trsvcid": "4420" 00:15:12.487 }, 00:15:12.487 "peer_address": { 00:15:12.487 "trtype": "TCP", 00:15:12.487 "adrfam": "IPv4", 00:15:12.487 "traddr": "10.0.0.1", 00:15:12.487 "trsvcid": "44058" 00:15:12.487 }, 00:15:12.487 "auth": { 00:15:12.487 "state": "completed", 00:15:12.487 "digest": "sha384", 00:15:12.487 "dhgroup": "null" 00:15:12.487 } 00:15:12.487 } 00:15:12.487 ]' 00:15:12.487 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.745 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.004 04:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.936 04:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.195 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.453 00:15:14.453 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:14.453 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:14.453 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:14.711 { 00:15:14.711 "cntlid": 55, 00:15:14.711 "qid": 0, 00:15:14.711 "state": "enabled", 00:15:14.711 "listen_address": { 00:15:14.711 "trtype": "TCP", 00:15:14.711 "adrfam": "IPv4", 00:15:14.711 "traddr": "10.0.0.2", 00:15:14.711 "trsvcid": "4420" 00:15:14.711 }, 00:15:14.711 "peer_address": { 00:15:14.711 "trtype": "TCP", 00:15:14.711 "adrfam": "IPv4", 00:15:14.711 "traddr": "10.0.0.1", 00:15:14.711 "trsvcid": "40640" 00:15:14.711 }, 00:15:14.711 "auth": { 00:15:14.711 "state": "completed", 00:15:14.711 "digest": "sha384", 00:15:14.711 "dhgroup": "null" 00:15:14.711 } 00:15:14.711 } 00:15:14.711 ]' 00:15:14.711 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.969 04:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.226 04:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:16.179 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.179 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.179 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.180 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:16.491 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:16.749 00:15:16.749 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:16.749 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.749 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:17.007 { 00:15:17.007 "cntlid": 57, 00:15:17.007 "qid": 0, 00:15:17.007 "state": "enabled", 00:15:17.007 "listen_address": { 00:15:17.007 "trtype": "TCP", 00:15:17.007 "adrfam": "IPv4", 00:15:17.007 "traddr": "10.0.0.2", 00:15:17.007 "trsvcid": "4420" 00:15:17.007 }, 00:15:17.007 "peer_address": { 00:15:17.007 "trtype": "TCP", 00:15:17.007 "adrfam": "IPv4", 00:15:17.007 "traddr": "10.0.0.1", 00:15:17.007 "trsvcid": "40668" 00:15:17.007 }, 00:15:17.007 "auth": { 00:15:17.007 "state": "completed", 00:15:17.007 "digest": "sha384", 00:15:17.007 "dhgroup": "ffdhe2048" 00:15:17.007 } 00:15:17.007 } 
00:15:17.007 ]' 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.007 04:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:17.007 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.007 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:17.267 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.267 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.267 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.525 04:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.459 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.717 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:18.718 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:18.976 00:15:18.976 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:18.976 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:18.976 04:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.233 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.233 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.233 04:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:19.234 { 00:15:19.234 "cntlid": 59, 00:15:19.234 "qid": 0, 00:15:19.234 "state": "enabled", 00:15:19.234 "listen_address": { 00:15:19.234 "trtype": "TCP", 00:15:19.234 "adrfam": "IPv4", 00:15:19.234 "traddr": "10.0.0.2", 00:15:19.234 "trsvcid": "4420" 00:15:19.234 }, 00:15:19.234 "peer_address": { 00:15:19.234 "trtype": "TCP", 00:15:19.234 "adrfam": "IPv4", 00:15:19.234 "traddr": "10.0.0.1", 00:15:19.234 "trsvcid": "40684" 00:15:19.234 }, 00:15:19.234 "auth": { 00:15:19.234 "state": "completed", 00:15:19.234 "digest": "sha384", 00:15:19.234 "dhgroup": "ffdhe2048" 00:15:19.234 } 00:15:19.234 } 00:15:19.234 ]' 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.234 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.491 04:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.426 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.683 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.684 04:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.684 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:20.684 04:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:21.250 00:15:21.250 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:21.250 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:21.250 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:21.509 { 00:15:21.509 "cntlid": 61, 00:15:21.509 "qid": 0, 00:15:21.509 "state": "enabled", 00:15:21.509 "listen_address": { 00:15:21.509 "trtype": "TCP", 00:15:21.509 "adrfam": "IPv4", 00:15:21.509 "traddr": "10.0.0.2", 00:15:21.509 "trsvcid": "4420" 00:15:21.509 }, 00:15:21.509 "peer_address": { 00:15:21.509 "trtype": "TCP", 00:15:21.509 "adrfam": "IPv4", 00:15:21.509 "traddr": "10.0.0.1", 00:15:21.509 "trsvcid": "40728" 00:15:21.509 }, 00:15:21.509 "auth": { 00:15:21.509 "state": "completed", 00:15:21.509 "digest": "sha384", 00:15:21.509 "dhgroup": "ffdhe2048" 00:15:21.509 } 00:15:21.509 } 00:15:21.509 ]' 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.509 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.768 04:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.701 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.959 04:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.217 00:15:23.217 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:23.217 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.217 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:23.475 { 00:15:23.475 "cntlid": 63, 00:15:23.475 "qid": 0, 00:15:23.475 "state": "enabled", 00:15:23.475 "listen_address": { 00:15:23.475 "trtype": "TCP", 00:15:23.475 "adrfam": "IPv4", 00:15:23.475 "traddr": "10.0.0.2", 00:15:23.475 "trsvcid": "4420" 00:15:23.475 }, 00:15:23.475 "peer_address": { 00:15:23.475 "trtype": "TCP", 00:15:23.475 "adrfam": "IPv4", 00:15:23.475 "traddr": "10.0.0.1", 00:15:23.475 "trsvcid": "54024" 00:15:23.475 }, 00:15:23.475 "auth": { 00:15:23.475 "state": "completed", 00:15:23.475 "digest": "sha384", 00:15:23.475 "dhgroup": "ffdhe2048" 00:15:23.475 } 00:15:23.475 } 00:15:23.475 ]' 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.475 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:23.733 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.733 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:23.733 04:16:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.733 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.733 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.991 04:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.923 04:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.180 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:15:25.180 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:25.180 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:25.181 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:25.438 00:15:25.438 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:25.438 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:25.438 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:25.696 { 00:15:25.696 "cntlid": 65, 00:15:25.696 "qid": 0, 00:15:25.696 "state": "enabled", 00:15:25.696 "listen_address": { 00:15:25.696 "trtype": "TCP", 00:15:25.696 "adrfam": "IPv4", 00:15:25.696 "traddr": "10.0.0.2", 00:15:25.696 "trsvcid": "4420" 00:15:25.696 }, 00:15:25.696 "peer_address": { 00:15:25.696 "trtype": "TCP", 00:15:25.696 "adrfam": "IPv4", 00:15:25.696 "traddr": "10.0.0.1", 00:15:25.696 "trsvcid": "54048" 00:15:25.696 }, 00:15:25.696 "auth": { 00:15:25.696 "state": "completed", 00:15:25.696 "digest": "sha384", 00:15:25.696 "dhgroup": "ffdhe3072" 00:15:25.696 } 00:15:25.696 } 00:15:25.696 ]' 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.696 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:25.954 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.954 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:25.954 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.954 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.954 04:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.212 04:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:27.147 04:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.147 04:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.147 04:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.147 
04:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.147 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.147 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:27.147 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.147 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:27.405 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:27.663 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.921 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.179 04:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.179 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:28.179 { 00:15:28.179 "cntlid": 67, 00:15:28.179 "qid": 0, 00:15:28.179 "state": "enabled", 00:15:28.179 "listen_address": { 00:15:28.179 "trtype": "TCP", 00:15:28.179 "adrfam": "IPv4", 00:15:28.179 "traddr": "10.0.0.2", 00:15:28.179 "trsvcid": 
"4420" 00:15:28.179 }, 00:15:28.179 "peer_address": { 00:15:28.179 "trtype": "TCP", 00:15:28.179 "adrfam": "IPv4", 00:15:28.179 "traddr": "10.0.0.1", 00:15:28.179 "trsvcid": "54054" 00:15:28.179 }, 00:15:28.179 "auth": { 00:15:28.179 "state": "completed", 00:15:28.179 "digest": "sha384", 00:15:28.179 "dhgroup": "ffdhe3072" 00:15:28.179 } 00:15:28.179 } 00:15:28.179 ]' 00:15:28.179 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:28.179 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.179 04:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:28.179 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.179 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:28.179 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.179 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.179 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.465 04:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.403 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:29.661 04:16:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:29.661 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:29.918 00:15:29.918 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:29.918 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:29.918 04:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:30.178 { 00:15:30.178 "cntlid": 69, 00:15:30.178 "qid": 0, 00:15:30.178 "state": "enabled", 00:15:30.178 "listen_address": { 00:15:30.178 "trtype": "TCP", 00:15:30.178 "adrfam": "IPv4", 00:15:30.178 "traddr": "10.0.0.2", 00:15:30.178 "trsvcid": "4420" 00:15:30.178 }, 00:15:30.178 "peer_address": { 00:15:30.178 "trtype": "TCP", 00:15:30.178 "adrfam": "IPv4", 00:15:30.178 "traddr": "10.0.0.1", 00:15:30.178 "trsvcid": "54080" 00:15:30.178 }, 00:15:30.178 "auth": { 00:15:30.178 "state": "completed", 00:15:30.178 "digest": "sha384", 00:15:30.178 "dhgroup": "ffdhe3072" 00:15:30.178 } 00:15:30.178 } 00:15:30.178 ]' 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.178 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:30.438 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.438 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:30.438 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.438 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.438 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.697 04:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.631 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.889 04:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.148 00:15:32.148 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:32.148 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.148 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:32.405 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:32.406 { 00:15:32.406 "cntlid": 71, 00:15:32.406 "qid": 0, 00:15:32.406 "state": "enabled", 00:15:32.406 "listen_address": { 00:15:32.406 "trtype": "TCP", 00:15:32.406 "adrfam": "IPv4", 00:15:32.406 "traddr": "10.0.0.2", 00:15:32.406 "trsvcid": "4420" 00:15:32.406 }, 00:15:32.406 "peer_address": { 00:15:32.406 "trtype": "TCP", 00:15:32.406 "adrfam": "IPv4", 00:15:32.406 "traddr": "10.0.0.1", 00:15:32.406 "trsvcid": "54098" 00:15:32.406 }, 00:15:32.406 "auth": { 00:15:32.406 "state": "completed", 00:15:32.406 "digest": "sha384", 00:15:32.406 "dhgroup": "ffdhe3072" 00:15:32.406 } 00:15:32.406 } 00:15:32.406 ]' 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.406 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:32.663 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.663 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.663 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.921 04:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:33.859 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.859 04:16:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:34.117 04:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:34.375 00:15:34.375 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:34.375 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:34.375 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:34.633 { 00:15:34.633 "cntlid": 73, 00:15:34.633 "qid": 0, 00:15:34.633 "state": "enabled", 00:15:34.633 "listen_address": { 00:15:34.633 "trtype": "TCP", 00:15:34.633 "adrfam": "IPv4", 00:15:34.633 "traddr": "10.0.0.2", 00:15:34.633 "trsvcid": "4420" 00:15:34.633 }, 00:15:34.633 "peer_address": { 00:15:34.633 "trtype": "TCP", 00:15:34.633 "adrfam": "IPv4", 00:15:34.633 "traddr": "10.0.0.1", 00:15:34.633 "trsvcid": "45550" 00:15:34.633 }, 00:15:34.633 "auth": { 00:15:34.633 "state": "completed", 00:15:34.633 "digest": "sha384", 00:15:34.633 "dhgroup": "ffdhe4096" 00:15:34.633 } 00:15:34.633 } 00:15:34.633 ]' 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.633 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:34.891 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.891 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:34.891 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.891 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.891 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.149 04:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.081 04:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:36.339 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:36.597 00:15:36.597 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:36.597 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:36.597 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:36.855 { 00:15:36.855 "cntlid": 75, 00:15:36.855 "qid": 0, 00:15:36.855 "state": "enabled", 00:15:36.855 "listen_address": { 00:15:36.855 "trtype": "TCP", 00:15:36.855 "adrfam": "IPv4", 00:15:36.855 "traddr": "10.0.0.2", 00:15:36.855 "trsvcid": "4420" 00:15:36.855 }, 00:15:36.855 "peer_address": { 00:15:36.855 "trtype": "TCP", 00:15:36.855 "adrfam": "IPv4", 00:15:36.855 "traddr": "10.0.0.1", 00:15:36.855 "trsvcid": "45574" 00:15:36.855 }, 00:15:36.855 "auth": { 00:15:36.855 "state": "completed", 00:15:36.855 "digest": "sha384", 00:15:36.855 "dhgroup": "ffdhe4096" 00:15:36.855 } 00:15:36.855 } 00:15:36.855 ]' 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.855 04:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.113 04:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:38.051 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.052 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:38.616 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:38.873 00:15:38.873 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:38.873 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:38.873 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:39.130 { 00:15:39.130 "cntlid": 77, 00:15:39.130 "qid": 0, 00:15:39.130 "state": "enabled", 00:15:39.130 "listen_address": { 00:15:39.130 "trtype": "TCP", 00:15:39.130 "adrfam": "IPv4", 00:15:39.130 "traddr": "10.0.0.2", 00:15:39.130 "trsvcid": "4420" 00:15:39.130 }, 00:15:39.130 "peer_address": { 00:15:39.130 "trtype": "TCP", 00:15:39.130 "adrfam": "IPv4", 00:15:39.130 "traddr": "10.0.0.1", 00:15:39.130 "trsvcid": "45590" 00:15:39.130 }, 00:15:39.130 "auth": { 00:15:39.130 "state": "completed", 00:15:39.130 "digest": "sha384", 00:15:39.130 "dhgroup": "ffdhe4096" 00:15:39.130 } 00:15:39.130 } 00:15:39.130 ]' 00:15:39.130 04:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.130 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.387 04:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.317 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.575 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.140 00:15:41.140 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:41.140 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:41.140 04:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:41.140 { 00:15:41.140 "cntlid": 79, 00:15:41.140 "qid": 0, 00:15:41.140 "state": "enabled", 00:15:41.140 "listen_address": { 00:15:41.140 "trtype": "TCP", 00:15:41.140 "adrfam": "IPv4", 00:15:41.140 "traddr": "10.0.0.2", 00:15:41.140 "trsvcid": "4420" 00:15:41.140 }, 00:15:41.140 "peer_address": { 00:15:41.140 "trtype": "TCP", 00:15:41.140 "adrfam": "IPv4", 00:15:41.140 "traddr": "10.0.0.1", 00:15:41.140 "trsvcid": "45624" 00:15:41.140 }, 00:15:41.140 "auth": { 00:15:41.140 "state": "completed", 00:15:41.140 "digest": "sha384", 00:15:41.140 "dhgroup": "ffdhe4096" 00:15:41.140 } 00:15:41.140 } 00:15:41.140 ]' 00:15:41.140 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.398 04:16:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.398 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.656 04:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.590 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:42.848 04:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:43.413 00:15:43.413 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:43.413 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:43.413 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:43.671 { 00:15:43.671 "cntlid": 81, 00:15:43.671 "qid": 0, 00:15:43.671 "state": "enabled", 00:15:43.671 "listen_address": { 00:15:43.671 "trtype": "TCP", 00:15:43.671 "adrfam": "IPv4", 00:15:43.671 "traddr": "10.0.0.2", 00:15:43.671 "trsvcid": "4420" 00:15:43.671 }, 00:15:43.671 "peer_address": { 00:15:43.671 "trtype": "TCP", 00:15:43.671 "adrfam": "IPv4", 00:15:43.671 "traddr": "10.0.0.1", 00:15:43.671 "trsvcid": "47274" 00:15:43.671 }, 00:15:43.671 "auth": { 00:15:43.671 "state": "completed", 00:15:43.671 "digest": "sha384", 00:15:43.671 "dhgroup": "ffdhe6144" 00:15:43.671 } 00:15:43.671 } 00:15:43.671 ]' 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.671 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.929 04:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.892 04:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.150 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.408 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.408 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:45.408 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:45.973 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:45.973 { 00:15:45.973 "cntlid": 83, 00:15:45.973 "qid": 0, 00:15:45.973 "state": "enabled", 00:15:45.973 "listen_address": { 00:15:45.973 "trtype": "TCP", 00:15:45.973 "adrfam": "IPv4", 00:15:45.973 "traddr": "10.0.0.2", 00:15:45.973 "trsvcid": "4420" 00:15:45.973 }, 00:15:45.973 "peer_address": { 00:15:45.973 
"trtype": "TCP", 00:15:45.973 "adrfam": "IPv4", 00:15:45.973 "traddr": "10.0.0.1", 00:15:45.973 "trsvcid": "47300" 00:15:45.973 }, 00:15:45.973 "auth": { 00:15:45.973 "state": "completed", 00:15:45.973 "digest": "sha384", 00:15:45.973 "dhgroup": "ffdhe6144" 00:15:45.973 } 00:15:45.973 } 00:15:45.973 ]' 00:15:45.973 04:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.231 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.489 04:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.422 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:47.681 04:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:48.246 00:15:48.246 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:48.246 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:48.246 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:48.504 { 00:15:48.504 "cntlid": 85, 00:15:48.504 "qid": 0, 00:15:48.504 "state": "enabled", 00:15:48.504 "listen_address": { 00:15:48.504 "trtype": "TCP", 00:15:48.504 "adrfam": "IPv4", 00:15:48.504 "traddr": "10.0.0.2", 00:15:48.504 "trsvcid": "4420" 00:15:48.504 }, 00:15:48.504 "peer_address": { 00:15:48.504 "trtype": "TCP", 00:15:48.504 "adrfam": "IPv4", 00:15:48.504 "traddr": "10.0.0.1", 00:15:48.504 "trsvcid": "47314" 00:15:48.504 }, 00:15:48.504 "auth": { 00:15:48.504 "state": "completed", 00:15:48.504 "digest": "sha384", 00:15:48.504 "dhgroup": "ffdhe6144" 00:15:48.504 } 00:15:48.504 } 00:15:48.504 ]' 00:15:48.504 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.505 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.763 04:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.697 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.955 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.212 04:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.212 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.212 04:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.778 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.779 04:16:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:50.779 { 00:15:50.779 "cntlid": 87, 00:15:50.779 "qid": 0, 00:15:50.779 "state": "enabled", 00:15:50.779 "listen_address": { 00:15:50.779 "trtype": "TCP", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "10.0.0.2", 00:15:50.779 "trsvcid": "4420" 00:15:50.779 }, 00:15:50.779 "peer_address": { 00:15:50.779 "trtype": "TCP", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "10.0.0.1", 00:15:50.779 "trsvcid": "47330" 00:15:50.779 }, 00:15:50.779 "auth": { 00:15:50.779 "state": "completed", 00:15:50.779 "digest": "sha384", 00:15:50.779 "dhgroup": "ffdhe6144" 00:15:50.779 } 00:15:50.779 } 00:15:50.779 ]' 00:15:50.779 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.037 04:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.295 04:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.229 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.486 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.487 04:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.487 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:52.487 04:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:53.422 00:15:53.422 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:53.422 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.422 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:53.680 { 00:15:53.680 "cntlid": 89, 00:15:53.680 "qid": 0, 00:15:53.680 "state": "enabled", 00:15:53.680 "listen_address": { 00:15:53.680 "trtype": "TCP", 00:15:53.680 "adrfam": "IPv4", 00:15:53.680 "traddr": "10.0.0.2", 00:15:53.680 "trsvcid": "4420" 00:15:53.680 }, 00:15:53.680 "peer_address": { 00:15:53.680 "trtype": "TCP", 00:15:53.680 "adrfam": "IPv4", 00:15:53.680 "traddr": "10.0.0.1", 00:15:53.680 "trsvcid": "49558" 00:15:53.680 }, 00:15:53.680 "auth": { 00:15:53.680 "state": "completed", 00:15:53.680 "digest": "sha384", 00:15:53.680 "dhgroup": "ffdhe8192" 00:15:53.680 } 00:15:53.680 } 00:15:53.680 ]' 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:53.680 04:16:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.680 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.246 04:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.180 04:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:55.438 04:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:56.371 00:15:56.371 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:56.371 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:56.371 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:56.630 { 00:15:56.630 "cntlid": 91, 00:15:56.630 "qid": 0, 00:15:56.630 "state": "enabled", 00:15:56.630 "listen_address": { 00:15:56.630 "trtype": "TCP", 00:15:56.630 "adrfam": "IPv4", 00:15:56.630 "traddr": "10.0.0.2", 00:15:56.630 "trsvcid": "4420" 00:15:56.630 }, 00:15:56.630 "peer_address": { 00:15:56.630 "trtype": "TCP", 00:15:56.630 "adrfam": "IPv4", 00:15:56.630 "traddr": "10.0.0.1", 00:15:56.630 "trsvcid": "49576" 00:15:56.630 }, 00:15:56.630 "auth": { 00:15:56.630 "state": "completed", 00:15:56.630 "digest": "sha384", 00:15:56.630 "dhgroup": "ffdhe8192" 00:15:56.630 } 00:15:56.630 } 00:15:56.630 ]' 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.630 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.887 04:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.822 04:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:58.390 04:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:59.324 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:59.324 { 00:15:59.324 "cntlid": 93, 00:15:59.324 "qid": 0, 00:15:59.324 "state": "enabled", 00:15:59.324 "listen_address": { 00:15:59.324 "trtype": "TCP", 00:15:59.324 "adrfam": "IPv4", 00:15:59.324 "traddr": "10.0.0.2", 00:15:59.324 "trsvcid": "4420" 00:15:59.324 }, 00:15:59.324 "peer_address": { 00:15:59.324 "trtype": "TCP", 00:15:59.324 "adrfam": "IPv4", 00:15:59.324 "traddr": "10.0.0.1", 00:15:59.324 "trsvcid": "49594" 00:15:59.324 }, 00:15:59.324 "auth": { 00:15:59.324 "state": "completed", 00:15:59.324 "digest": "sha384", 00:15:59.324 "dhgroup": "ffdhe8192" 00:15:59.324 } 00:15:59.324 } 00:15:59.324 ]' 00:15:59.324 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.611 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.870 04:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.804 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.062 04:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.997 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.997 04:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:01.997 { 00:16:01.997 "cntlid": 95, 00:16:01.997 "qid": 0, 00:16:01.997 "state": "enabled", 00:16:01.997 "listen_address": { 00:16:01.997 "trtype": "TCP", 00:16:01.997 "adrfam": "IPv4", 00:16:01.997 "traddr": "10.0.0.2", 00:16:01.997 "trsvcid": "4420" 00:16:01.997 }, 00:16:01.997 "peer_address": { 00:16:01.997 "trtype": "TCP", 00:16:01.997 "adrfam": "IPv4", 00:16:01.997 "traddr": "10.0.0.1", 00:16:01.997 "trsvcid": "49632" 00:16:01.997 }, 00:16:01.997 "auth": { 00:16:01.997 "state": "completed", 00:16:01.997 "digest": "sha384", 00:16:01.997 "dhgroup": "ffdhe8192" 00:16:01.997 } 00:16:01.997 } 00:16:01.997 ]' 00:16:01.997 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.255 04:16:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.255 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.514 04:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.449 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:03.706 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:03.965 00:16:03.965 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:03.965 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:03.965 04:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.223 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.223 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:04.224 { 00:16:04.224 "cntlid": 97, 00:16:04.224 "qid": 0, 00:16:04.224 "state": "enabled", 00:16:04.224 "listen_address": { 00:16:04.224 "trtype": "TCP", 00:16:04.224 "adrfam": "IPv4", 00:16:04.224 "traddr": "10.0.0.2", 00:16:04.224 "trsvcid": "4420" 00:16:04.224 }, 00:16:04.224 "peer_address": { 00:16:04.224 "trtype": "TCP", 00:16:04.224 "adrfam": "IPv4", 00:16:04.224 "traddr": "10.0.0.1", 00:16:04.224 "trsvcid": "39082" 00:16:04.224 }, 00:16:04.224 "auth": { 00:16:04.224 "state": "completed", 00:16:04.224 "digest": "sha512", 00:16:04.224 "dhgroup": "null" 00:16:04.224 } 00:16:04.224 } 00:16:04.224 ]' 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:04.224 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:04.482 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.482 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.482 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.482 04:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.860 04:16:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:05.860 04:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:06.118 00:16:06.118 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:06.118 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:06.118 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:06.376 { 00:16:06.376 "cntlid": 99, 00:16:06.376 "qid": 0, 00:16:06.376 "state": "enabled", 00:16:06.376 "listen_address": { 00:16:06.376 "trtype": "TCP", 00:16:06.376 "adrfam": "IPv4", 00:16:06.376 "traddr": "10.0.0.2", 00:16:06.376 "trsvcid": "4420" 00:16:06.376 }, 
00:16:06.376 "peer_address": { 00:16:06.376 "trtype": "TCP", 00:16:06.376 "adrfam": "IPv4", 00:16:06.376 "traddr": "10.0.0.1", 00:16:06.376 "trsvcid": "39096" 00:16:06.376 }, 00:16:06.376 "auth": { 00:16:06.376 "state": "completed", 00:16:06.376 "digest": "sha512", 00:16:06.376 "dhgroup": "null" 00:16:06.376 } 00:16:06.376 } 00:16:06.376 ]' 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:06.376 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:06.634 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.634 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.634 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.891 04:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.826 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.083 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:08.084 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:08.084 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.084 04:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.084 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:08.084 04:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:08.341 00:16:08.341 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:08.341 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:08.341 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:08.600 { 00:16:08.600 "cntlid": 101, 00:16:08.600 "qid": 0, 00:16:08.600 "state": "enabled", 00:16:08.600 "listen_address": { 00:16:08.600 "trtype": "TCP", 00:16:08.600 "adrfam": "IPv4", 00:16:08.600 "traddr": "10.0.0.2", 00:16:08.600 "trsvcid": "4420" 00:16:08.600 }, 00:16:08.600 "peer_address": { 00:16:08.600 "trtype": "TCP", 00:16:08.600 "adrfam": "IPv4", 00:16:08.600 "traddr": "10.0.0.1", 00:16:08.600 "trsvcid": "39114" 00:16:08.600 }, 00:16:08.600 "auth": { 00:16:08.600 "state": "completed", 00:16:08.600 "digest": "sha512", 00:16:08.600 "dhgroup": "null" 00:16:08.600 } 00:16:08.600 } 00:16:08.600 ]' 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.600 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.859 04:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:09.790 04:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.790 04:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.791 04:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.047 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.613 00:16:10.613 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:10.613 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:10.613 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:10.871 { 00:16:10.871 "cntlid": 103, 00:16:10.871 "qid": 0, 00:16:10.871 "state": "enabled", 00:16:10.871 "listen_address": { 00:16:10.871 "trtype": "TCP", 00:16:10.871 "adrfam": "IPv4", 00:16:10.871 "traddr": "10.0.0.2", 00:16:10.871 "trsvcid": "4420" 00:16:10.871 }, 00:16:10.871 "peer_address": { 00:16:10.871 "trtype": "TCP", 00:16:10.871 "adrfam": "IPv4", 00:16:10.871 "traddr": "10.0.0.1", 00:16:10.871 "trsvcid": "39140" 00:16:10.871 }, 00:16:10.871 "auth": { 00:16:10.871 "state": "completed", 00:16:10.871 "digest": "sha512", 00:16:10.871 "dhgroup": "null" 00:16:10.871 } 00:16:10.871 } 00:16:10.871 ]' 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.871 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.129 04:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.062 04:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:12.320 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:12.885 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:12.885 { 00:16:12.885 "cntlid": 105, 00:16:12.885 "qid": 0, 00:16:12.885 "state": "enabled", 00:16:12.885 "listen_address": { 00:16:12.885 "trtype": "TCP", 00:16:12.885 "adrfam": "IPv4", 00:16:12.885 "traddr": "10.0.0.2", 00:16:12.885 "trsvcid": "4420" 00:16:12.885 }, 00:16:12.885 "peer_address": { 00:16:12.885 "trtype": "TCP", 00:16:12.885 "adrfam": "IPv4", 00:16:12.885 "traddr": "10.0.0.1", 00:16:12.885 "trsvcid": "34320" 00:16:12.885 }, 00:16:12.885 "auth": { 00:16:12.885 "state": "completed", 00:16:12.885 "digest": "sha512", 00:16:12.885 "dhgroup": "ffdhe2048" 00:16:12.885 } 00:16:12.885 } 00:16:12.885 ]' 00:16:12.885 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:13.142 04:17:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.142 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:13.142 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.142 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:13.142 04:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.142 04:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.142 04:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.400 04:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:14.334 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.335 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:14.593 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:14.900 00:16:14.900 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:14.900 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:14.900 04:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:15.177 { 00:16:15.177 "cntlid": 107, 00:16:15.177 "qid": 0, 00:16:15.177 "state": "enabled", 00:16:15.177 "listen_address": { 00:16:15.177 "trtype": "TCP", 00:16:15.177 "adrfam": "IPv4", 00:16:15.177 "traddr": "10.0.0.2", 00:16:15.177 "trsvcid": "4420" 00:16:15.177 }, 00:16:15.177 "peer_address": { 00:16:15.177 "trtype": "TCP", 00:16:15.177 "adrfam": "IPv4", 00:16:15.177 "traddr": "10.0.0.1", 00:16:15.177 "trsvcid": "34344" 00:16:15.177 }, 00:16:15.177 "auth": { 00:16:15.177 "state": "completed", 00:16:15.177 "digest": "sha512", 00:16:15.177 "dhgroup": "ffdhe2048" 00:16:15.177 } 00:16:15.177 } 00:16:15.177 ]' 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.177 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:15.435 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.435 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:15.435 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.435 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.435 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.692 04:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:16.626 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.627 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:16.885 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:17.143 00:16:17.143 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:17.143 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:17.143 04:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:17.401 { 00:16:17.401 "cntlid": 109, 00:16:17.401 "qid": 0, 00:16:17.401 "state": "enabled", 00:16:17.401 "listen_address": { 00:16:17.401 "trtype": "TCP", 00:16:17.401 "adrfam": "IPv4", 00:16:17.401 "traddr": "10.0.0.2", 00:16:17.401 "trsvcid": "4420" 00:16:17.401 }, 00:16:17.401 "peer_address": { 00:16:17.401 "trtype": "TCP", 00:16:17.401 "adrfam": "IPv4", 00:16:17.401 "traddr": "10.0.0.1", 00:16:17.401 "trsvcid": "34366" 00:16:17.401 }, 00:16:17.401 "auth": { 00:16:17.401 "state": "completed", 00:16:17.401 "digest": "sha512", 00:16:17.401 "dhgroup": "ffdhe2048" 00:16:17.401 } 00:16:17.401 } 00:16:17.401 ]' 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.401 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.658 04:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.592 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.850 04:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.108 00:16:19.108 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:19.108 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:19.108 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:19.366 { 00:16:19.366 "cntlid": 111, 00:16:19.366 "qid": 0, 00:16:19.366 "state": "enabled", 00:16:19.366 "listen_address": { 00:16:19.366 "trtype": "TCP", 00:16:19.366 "adrfam": "IPv4", 00:16:19.366 "traddr": "10.0.0.2", 00:16:19.366 "trsvcid": "4420" 00:16:19.366 }, 00:16:19.366 "peer_address": { 00:16:19.366 "trtype": "TCP", 00:16:19.366 "adrfam": "IPv4", 00:16:19.366 "traddr": "10.0.0.1", 00:16:19.366 "trsvcid": "34396" 00:16:19.366 }, 00:16:19.366 "auth": { 00:16:19.366 "state": "completed", 00:16:19.366 "digest": "sha512", 00:16:19.366 "dhgroup": "ffdhe2048" 00:16:19.366 } 00:16:19.366 } 00:16:19.366 ]' 00:16:19.366 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.624 04:17:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.624 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.882 04:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.815 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.072 04:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.330 00:16:21.330 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:21.330 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.330 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:21.588 { 00:16:21.588 "cntlid": 113, 00:16:21.588 "qid": 0, 00:16:21.588 "state": "enabled", 00:16:21.588 "listen_address": { 00:16:21.588 "trtype": "TCP", 00:16:21.588 "adrfam": "IPv4", 00:16:21.588 "traddr": "10.0.0.2", 00:16:21.588 "trsvcid": "4420" 00:16:21.588 }, 00:16:21.588 "peer_address": { 00:16:21.588 "trtype": "TCP", 00:16:21.588 "adrfam": "IPv4", 00:16:21.588 "traddr": "10.0.0.1", 00:16:21.588 "trsvcid": "34426" 00:16:21.588 }, 00:16:21.588 "auth": { 00:16:21.588 "state": "completed", 00:16:21.588 "digest": "sha512", 00:16:21.588 "dhgroup": "ffdhe3072" 00:16:21.588 } 00:16:21.588 } 00:16:21.588 ]' 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.588 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:21.847 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.847 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.847 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.106 04:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.039 04:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:23.297 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:23.554 00:16:23.554 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.554 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.554 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:23.813 { 00:16:23.813 "cntlid": 115, 00:16:23.813 "qid": 0, 00:16:23.813 "state": "enabled", 00:16:23.813 "listen_address": { 00:16:23.813 "trtype": "TCP", 00:16:23.813 "adrfam": "IPv4", 00:16:23.813 "traddr": "10.0.0.2", 00:16:23.813 "trsvcid": "4420" 00:16:23.813 }, 00:16:23.813 "peer_address": { 00:16:23.813 
"trtype": "TCP", 00:16:23.813 "adrfam": "IPv4", 00:16:23.813 "traddr": "10.0.0.1", 00:16:23.813 "trsvcid": "46638" 00:16:23.813 }, 00:16:23.813 "auth": { 00:16:23.813 "state": "completed", 00:16:23.813 "digest": "sha512", 00:16:23.813 "dhgroup": "ffdhe3072" 00:16:23.813 } 00:16:23.813 } 00:16:23.813 ]' 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.813 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.071 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.071 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.071 04:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.329 04:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.264 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:25.523 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:25.780 00:16:25.780 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:25.780 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:25.780 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:26.038 { 00:16:26.038 "cntlid": 117, 00:16:26.038 "qid": 0, 00:16:26.038 "state": "enabled", 00:16:26.038 "listen_address": { 00:16:26.038 "trtype": "TCP", 00:16:26.038 "adrfam": "IPv4", 00:16:26.038 "traddr": "10.0.0.2", 00:16:26.038 "trsvcid": "4420" 00:16:26.038 }, 00:16:26.038 "peer_address": { 00:16:26.038 "trtype": "TCP", 00:16:26.038 "adrfam": "IPv4", 00:16:26.038 "traddr": "10.0.0.1", 00:16:26.038 "trsvcid": "46658" 00:16:26.038 }, 00:16:26.038 "auth": { 00:16:26.038 "state": "completed", 00:16:26.038 "digest": "sha512", 00:16:26.038 "dhgroup": "ffdhe3072" 00:16:26.038 } 00:16:26.038 } 00:16:26.038 ]' 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.038 04:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:26.038 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.038 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:26.038 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.038 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.038 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.296 04:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:27.230 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.230 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.230 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.230 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.488 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.488 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:27.488 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.488 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.746 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.004 00:16:28.004 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:28.004 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:28.004 04:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.262 04:17:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:28.262 { 00:16:28.262 "cntlid": 119, 00:16:28.262 "qid": 0, 00:16:28.262 "state": "enabled", 00:16:28.262 "listen_address": { 00:16:28.262 "trtype": "TCP", 00:16:28.262 "adrfam": "IPv4", 00:16:28.262 "traddr": "10.0.0.2", 00:16:28.262 "trsvcid": "4420" 00:16:28.262 }, 00:16:28.262 "peer_address": { 00:16:28.262 "trtype": "TCP", 00:16:28.262 "adrfam": "IPv4", 00:16:28.262 "traddr": "10.0.0.1", 00:16:28.262 "trsvcid": "46678" 00:16:28.262 }, 00:16:28.262 "auth": { 00:16:28.262 "state": "completed", 00:16:28.262 "digest": "sha512", 00:16:28.262 "dhgroup": "ffdhe3072" 00:16:28.262 } 00:16:28.262 } 00:16:28.262 ]' 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.262 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.828 04:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:29.799 04:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:30.365 00:16:30.365 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:30.365 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:30.365 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:30.624 { 00:16:30.624 "cntlid": 121, 00:16:30.624 "qid": 0, 00:16:30.624 "state": "enabled", 00:16:30.624 "listen_address": { 00:16:30.624 "trtype": "TCP", 00:16:30.624 "adrfam": "IPv4", 00:16:30.624 "traddr": "10.0.0.2", 00:16:30.624 "trsvcid": "4420" 00:16:30.624 }, 00:16:30.624 "peer_address": { 00:16:30.624 "trtype": "TCP", 00:16:30.624 "adrfam": "IPv4", 00:16:30.624 "traddr": "10.0.0.1", 00:16:30.624 "trsvcid": "46720" 00:16:30.624 }, 00:16:30.624 "auth": { 00:16:30.624 "state": "completed", 00:16:30.624 "digest": "sha512", 00:16:30.624 "dhgroup": "ffdhe4096" 00:16:30.624 } 00:16:30.624 } 00:16:30.624 ]' 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:30.624 04:17:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.624 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.882 04:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.815 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:32.074 04:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:32.639 00:16:32.639 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:32.639 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:32.639 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.639 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.639 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:32.897 { 00:16:32.897 "cntlid": 123, 00:16:32.897 "qid": 0, 00:16:32.897 "state": "enabled", 00:16:32.897 "listen_address": { 00:16:32.897 "trtype": "TCP", 00:16:32.897 "adrfam": "IPv4", 00:16:32.897 "traddr": "10.0.0.2", 00:16:32.897 "trsvcid": "4420" 00:16:32.897 }, 00:16:32.897 "peer_address": { 00:16:32.897 "trtype": "TCP", 00:16:32.897 "adrfam": "IPv4", 00:16:32.897 "traddr": "10.0.0.1", 00:16:32.897 "trsvcid": "46754" 00:16:32.897 }, 00:16:32.897 "auth": { 00:16:32.897 "state": "completed", 00:16:32.897 "digest": "sha512", 00:16:32.897 "dhgroup": "ffdhe4096" 00:16:32.897 } 00:16:32.897 } 00:16:32.897 ]' 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.897 04:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.155 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.094 04:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:34.352 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:34.919 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.919 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.178 04:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:35.178 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:35.178 { 00:16:35.178 "cntlid": 125, 00:16:35.178 "qid": 0, 00:16:35.178 "state": "enabled", 00:16:35.178 "listen_address": { 00:16:35.178 "trtype": "TCP", 00:16:35.178 "adrfam": "IPv4", 00:16:35.178 "traddr": "10.0.0.2", 00:16:35.178 "trsvcid": "4420" 00:16:35.178 }, 00:16:35.178 "peer_address": { 00:16:35.178 "trtype": "TCP", 00:16:35.178 "adrfam": "IPv4", 00:16:35.178 "traddr": "10.0.0.1", 00:16:35.178 "trsvcid": "56748" 00:16:35.178 }, 00:16:35.178 "auth": { 00:16:35.178 "state": "completed", 00:16:35.178 "digest": "sha512", 00:16:35.178 "dhgroup": "ffdhe4096" 00:16:35.178 } 00:16:35.178 } 00:16:35.178 ]' 00:16:35.178 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:35.178 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.178 04:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:35.178 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.178 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:35.178 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.178 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.178 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.437 04:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:36.374 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.374 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.374 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.374 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.374 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.375 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:36.375 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.375 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.632 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.198 00:16:37.198 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:37.198 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:37.198 04:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.198 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.198 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.198 04:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.198 04:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:37.456 { 00:16:37.456 "cntlid": 127, 00:16:37.456 "qid": 0, 00:16:37.456 "state": "enabled", 00:16:37.456 "listen_address": { 00:16:37.456 "trtype": "TCP", 00:16:37.456 "adrfam": "IPv4", 00:16:37.456 "traddr": "10.0.0.2", 00:16:37.456 "trsvcid": "4420" 00:16:37.456 }, 00:16:37.456 "peer_address": { 00:16:37.456 "trtype": "TCP", 00:16:37.456 "adrfam": "IPv4", 00:16:37.456 "traddr": "10.0.0.1", 00:16:37.456 "trsvcid": "56782" 00:16:37.456 }, 00:16:37.456 "auth": { 00:16:37.456 "state": "completed", 00:16:37.456 "digest": "sha512", 00:16:37.456 "dhgroup": "ffdhe4096" 00:16:37.456 } 00:16:37.456 } 00:16:37.456 ]' 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.456 04:17:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.456 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.713 04:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.648 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:38.906 04:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:39.472 00:16:39.472 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:39.472 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:39.472 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:39.731 { 00:16:39.731 "cntlid": 129, 00:16:39.731 "qid": 0, 00:16:39.731 "state": "enabled", 00:16:39.731 "listen_address": { 00:16:39.731 "trtype": "TCP", 00:16:39.731 "adrfam": "IPv4", 00:16:39.731 "traddr": "10.0.0.2", 00:16:39.731 "trsvcid": "4420" 00:16:39.731 }, 00:16:39.731 "peer_address": { 00:16:39.731 "trtype": "TCP", 00:16:39.731 "adrfam": "IPv4", 00:16:39.731 "traddr": "10.0.0.1", 00:16:39.731 "trsvcid": "56816" 00:16:39.731 }, 00:16:39.731 "auth": { 00:16:39.731 "state": "completed", 00:16:39.731 "digest": "sha512", 00:16:39.731 "dhgroup": "ffdhe6144" 00:16:39.731 } 00:16:39.731 } 00:16:39.731 ]' 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.731 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.989 04:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.362 04:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.362 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:41.363 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:41.927 00:16:41.927 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:41.927 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:41.927 04:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:42.185 { 00:16:42.185 "cntlid": 131, 00:16:42.185 "qid": 0, 00:16:42.185 "state": "enabled", 00:16:42.185 "listen_address": { 00:16:42.185 "trtype": "TCP", 00:16:42.185 "adrfam": "IPv4", 00:16:42.185 "traddr": "10.0.0.2", 00:16:42.185 "trsvcid": "4420" 00:16:42.185 }, 00:16:42.185 "peer_address": { 00:16:42.185 
"trtype": "TCP", 00:16:42.185 "adrfam": "IPv4", 00:16:42.185 "traddr": "10.0.0.1", 00:16:42.185 "trsvcid": "56838" 00:16:42.185 }, 00:16:42.185 "auth": { 00:16:42.185 "state": "completed", 00:16:42.185 "digest": "sha512", 00:16:42.185 "dhgroup": "ffdhe6144" 00:16:42.185 } 00:16:42.185 } 00:16:42.185 ]' 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.185 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.442 04:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.374 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.631 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:16:43.631 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:43.631 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.631 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.632 04:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:44.197 00:16:44.197 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:44.197 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:44.197 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:44.456 { 00:16:44.456 "cntlid": 133, 00:16:44.456 "qid": 0, 00:16:44.456 "state": "enabled", 00:16:44.456 "listen_address": { 00:16:44.456 "trtype": "TCP", 00:16:44.456 "adrfam": "IPv4", 00:16:44.456 "traddr": "10.0.0.2", 00:16:44.456 "trsvcid": "4420" 00:16:44.456 }, 00:16:44.456 "peer_address": { 00:16:44.456 "trtype": "TCP", 00:16:44.456 "adrfam": "IPv4", 00:16:44.456 "traddr": "10.0.0.1", 00:16:44.456 "trsvcid": "59528" 00:16:44.456 }, 00:16:44.456 "auth": { 00:16:44.456 "state": "completed", 00:16:44.456 "digest": "sha512", 00:16:44.456 "dhgroup": "ffdhe6144" 00:16:44.456 } 00:16:44.456 } 00:16:44.456 ]' 00:16:44.456 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.745 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.003 04:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.939 04:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.198 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.764 00:16:46.764 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:46.764 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:46.764 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.022 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.022 04:17:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.022 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.022 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 04:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.022 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:47.022 { 00:16:47.022 "cntlid": 135, 00:16:47.022 "qid": 0, 00:16:47.022 "state": "enabled", 00:16:47.023 "listen_address": { 00:16:47.023 "trtype": "TCP", 00:16:47.023 "adrfam": "IPv4", 00:16:47.023 "traddr": "10.0.0.2", 00:16:47.023 "trsvcid": "4420" 00:16:47.023 }, 00:16:47.023 "peer_address": { 00:16:47.023 "trtype": "TCP", 00:16:47.023 "adrfam": "IPv4", 00:16:47.023 "traddr": "10.0.0.1", 00:16:47.023 "trsvcid": "59546" 00:16:47.023 }, 00:16:47.023 "auth": { 00:16:47.023 "state": "completed", 00:16:47.023 "digest": "sha512", 00:16:47.023 "dhgroup": "ffdhe6144" 00:16:47.023 } 00:16:47.023 } 00:16:47.023 ]' 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.023 04:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.280 04:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.212 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:48.470 04:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:49.404 00:16:49.404 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:49.404 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:49.404 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:49.662 { 00:16:49.662 "cntlid": 137, 00:16:49.662 "qid": 0, 00:16:49.662 "state": "enabled", 00:16:49.662 "listen_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.2", 00:16:49.662 "trsvcid": "4420" 00:16:49.662 }, 00:16:49.662 "peer_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.1", 00:16:49.662 "trsvcid": "59580" 00:16:49.662 }, 00:16:49.662 "auth": { 00:16:49.662 "state": "completed", 00:16:49.662 "digest": "sha512", 00:16:49.662 "dhgroup": "ffdhe8192" 00:16:49.662 } 00:16:49.662 } 00:16:49.662 ]' 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.662 04:17:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.662 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:49.919 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.919 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:49.919 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.919 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.919 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.177 04:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.112 04:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:51.370 04:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:52.304 00:16:52.304 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.304 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.304 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.563 { 00:16:52.563 "cntlid": 139, 00:16:52.563 "qid": 0, 00:16:52.563 "state": "enabled", 00:16:52.563 "listen_address": { 00:16:52.563 "trtype": "TCP", 00:16:52.563 "adrfam": "IPv4", 00:16:52.563 "traddr": "10.0.0.2", 00:16:52.563 "trsvcid": "4420" 00:16:52.563 }, 00:16:52.563 "peer_address": { 00:16:52.563 "trtype": "TCP", 00:16:52.563 "adrfam": "IPv4", 00:16:52.563 "traddr": "10.0.0.1", 00:16:52.563 "trsvcid": "59598" 00:16:52.563 }, 00:16:52.563 "auth": { 00:16:52.563 "state": "completed", 00:16:52.563 "digest": "sha512", 00:16:52.563 "dhgroup": "ffdhe8192" 00:16:52.563 } 00:16:52.563 } 00:16:52.563 ]' 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.563 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.564 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.564 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.130 04:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YjIyOThkNzFjMTg4MjYwYTFiNTRmMjdlZjRmNWEzOWJN9Fjk: 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.063 04:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.321 04:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.254 00:16:55.254 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.254 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.254 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
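After each attach, the test confirms that authentication actually completed by asking the target for the subsystem's qpairs and comparing the negotiated parameters against the requested ones; the jq filters below are the same ones used in the trace, and the expected values match this iteration (sha512 / ffdhe8192). This is a hedged sketch: it assumes the target-side rpc_cmd calls resolve to scripts/rpc.py against the target's default RPC socket, which is not spelled out in this excerpt.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Query the target for the subsystem's qpairs and verify that
    # DH-HMAC-CHAP finished with the requested digest and DH group.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

    # Detach the host-side controller before the next key is exercised;
    # the trace then repeats the check with the kernel initiator via
    # 'nvme connect ... --dhchap-secret' and 'nvme disconnect'.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0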
00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:55.512 { 00:16:55.512 "cntlid": 141, 00:16:55.512 "qid": 0, 00:16:55.512 "state": "enabled", 00:16:55.512 "listen_address": { 00:16:55.512 "trtype": "TCP", 00:16:55.512 "adrfam": "IPv4", 00:16:55.512 "traddr": "10.0.0.2", 00:16:55.512 "trsvcid": "4420" 00:16:55.512 }, 00:16:55.512 "peer_address": { 00:16:55.512 "trtype": "TCP", 00:16:55.512 "adrfam": "IPv4", 00:16:55.512 "traddr": "10.0.0.1", 00:16:55.512 "trsvcid": "55584" 00:16:55.512 }, 00:16:55.512 "auth": { 00:16:55.512 "state": "completed", 00:16:55.512 "digest": "sha512", 00:16:55.512 "dhgroup": "ffdhe8192" 00:16:55.512 } 00:16:55.512 } 00:16:55.512 ]' 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.512 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.770 04:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmNhYmQ0OGVkYjUyMDhmOWE3NGFkOTJkZjEzYWI3OGVjZmRjZmExODljNWJiMmExD53nPg==: 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.703 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.269 04:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.269 04:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.269 04:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.269 04:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.203 00:16:58.203 04:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.203 04:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.203 04:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:58.203 { 00:16:58.203 "cntlid": 143, 00:16:58.203 "qid": 0, 00:16:58.203 "state": "enabled", 00:16:58.203 "listen_address": { 00:16:58.203 "trtype": "TCP", 00:16:58.203 "adrfam": "IPv4", 00:16:58.203 "traddr": "10.0.0.2", 00:16:58.203 "trsvcid": "4420" 00:16:58.203 }, 00:16:58.203 "peer_address": { 00:16:58.203 "trtype": "TCP", 00:16:58.203 "adrfam": "IPv4", 00:16:58.203 "traddr": "10.0.0.1", 00:16:58.203 "trsvcid": "55604" 00:16:58.203 }, 00:16:58.203 "auth": { 00:16:58.203 "state": "completed", 00:16:58.203 "digest": "sha512", 00:16:58.203 "dhgroup": "ffdhe8192" 00:16:58.203 } 00:16:58.203 } 00:16:58.203 ]' 00:16:58.203 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:58.460 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.460 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:58.460 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.460 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:58.460 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.460 04:17:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.461 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.719 04:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2Y3NWJhNjEzNTI5NWRhYjVjNmVhZmE4NTljNDBkYWI3YjljZDBjZGEwNDYwZjUxZjU4M2Q3MmI3YWEwNTNjM8kLz8s=: 00:16:59.653 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.653 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.653 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.653 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.654 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.911 04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:59.911 
04:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.882 00:17:00.882 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.882 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.882 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:01.140 { 00:17:01.140 "cntlid": 145, 00:17:01.140 "qid": 0, 00:17:01.140 "state": "enabled", 00:17:01.140 "listen_address": { 00:17:01.140 "trtype": "TCP", 00:17:01.140 "adrfam": "IPv4", 00:17:01.140 "traddr": "10.0.0.2", 00:17:01.140 "trsvcid": "4420" 00:17:01.140 }, 00:17:01.140 "peer_address": { 00:17:01.140 "trtype": "TCP", 00:17:01.140 "adrfam": "IPv4", 00:17:01.140 "traddr": "10.0.0.1", 00:17:01.140 "trsvcid": "55624" 00:17:01.140 }, 00:17:01.140 "auth": { 00:17:01.140 "state": "completed", 00:17:01.140 "digest": "sha512", 00:17:01.140 "dhgroup": "ffdhe8192" 00:17:01.140 } 00:17:01.140 } 00:17:01.140 ]' 00:17:01.140 04:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.140 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.398 04:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:OWI1NThmNTZlMmQxYTI2OWFjMThjMmY4ODRlNzViNGU3ZDY3NGE4ZmRlYmEwZjhiNegQqg==: 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.332 04:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.266 request: 00:17:03.266 { 00:17:03.266 "name": "nvme0", 00:17:03.266 "trtype": "tcp", 00:17:03.266 "traddr": "10.0.0.2", 00:17:03.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.266 "adrfam": "ipv4", 00:17:03.266 "trsvcid": "4420", 00:17:03.266 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.266 "dhchap_key": "key2", 00:17:03.266 "method": "bdev_nvme_attach_controller", 00:17:03.266 "req_id": 1 00:17:03.266 } 00:17:03.266 Got JSON-RPC error response 00:17:03.266 response: 00:17:03.266 { 00:17:03.266 "code": -32602, 00:17:03.266 "message": "Invalid parameters" 00:17:03.266 } 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.266 04:17:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3369305 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3369305 ']' 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3369305 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3369305 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:03.266 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3369305' 00:17:03.266 killing process with pid 3369305 00:17:03.267 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3369305 00:17:03.267 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3369305 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.832 rmmod nvme_tcp 00:17:03.832 rmmod nvme_fabrics 00:17:03.832 rmmod nvme_keyring 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3369169 ']' 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3369169 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3369169 ']' 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3369169 00:17:03.832 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:03.833 
04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3369169 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3369169' 00:17:03.833 killing process with pid 3369169 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3369169 00:17:03.833 04:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3369169 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.092 04:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.626 04:17:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.627 04:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.l7s /tmp/spdk.key-sha256.tQ4 /tmp/spdk.key-sha384.vTz /tmp/spdk.key-sha512.gvn /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:06.627 00:17:06.627 real 2m59.696s 00:17:06.627 user 6m57.353s 00:17:06.627 sys 0m21.388s 00:17:06.627 04:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:06.627 04:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 ************************************ 00:17:06.627 END TEST nvmf_auth_target 00:17:06.627 ************************************ 00:17:06.627 04:17:54 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:06.627 04:17:54 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.627 04:17:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:06.627 04:17:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:06.627 04:17:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 ************************************ 00:17:06.627 START TEST nvmf_bdevio_no_huge 00:17:06.627 ************************************ 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.627 * Looking for test storage... 
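The nvmf_auth_target run that finished just above exercises NVMe in-band authentication (DH-HMAC-CHAP) end to end. Condensed, and with the host NQN and secret reduced to placeholders, the flow visible in that trace amounts to roughly the following sketch (not an exact replay of the script):

# host side: allow-list the digests and DH groups the initiator may negotiate
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# target side: the subsystem only admits the host if it presents the registered key
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0

# attach from the SPDK host with the matching key ...
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

# ... or from the kernel initiator, passing the secret itself
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -q <host-nqn> \
    --dhchap-secret 'DHHC-1:00:<base64-secret>:'

Attaching with a key the subsystem was never given (key2 in the trace) is the negative case: the RPC fails with -32602 Invalid parameters, which is what the NOT wrapper asserts above.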
00:17:06.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.627 04:17:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.627 04:17:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.739 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.739 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.739 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.739 04:17:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.739 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:17:08.996 00:17:08.996 --- 10.0.0.2 ping statistics --- 00:17:08.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.996 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:17:08.996 00:17:08.996 --- 10.0.0.1 ping statistics --- 00:17:08.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.996 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3393712 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3393712 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3393712 ']' 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
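Everything from the namespace creation through the two pings is the standard bring-up these phy tests use on a two-port NIC; condensed, and with the binary path shortened, it is roughly the following sketch (same interface names, addresses and sizes as in this run):

# one E810 port stays in the root namespace as the initiator, the other moves
# into a private namespace and acts as the target
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start nvmf_tgt inside the namespace without hugepages: 1024 MiB of ordinary
# memory (--no-huge -s 1024) on cores 3-6 (-m 0x78); the --no-huge path is the
# point of this particular suite
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &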
00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.996 04:17:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 [2024-05-15 04:17:56.917716] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:08.996 [2024-05-15 04:17:56.917817] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:08.996 [2024-05-15 04:17:57.004911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.253 [2024-05-15 04:17:57.111336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.253 [2024-05-15 04:17:57.111400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.253 [2024-05-15 04:17:57.111413] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.253 [2024-05-15 04:17:57.111424] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.253 [2024-05-15 04:17:57.111433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.253 [2024-05-15 04:17:57.111559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.253 [2024-05-15 04:17:57.111623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:09.253 [2024-05-15 04:17:57.111688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:09.253 [2024-05-15 04:17:57.111690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 [2024-05-15 04:17:57.902212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 Malloc0 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.188 [2024-05-15 04:17:57.940130] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:10.188 [2024-05-15 04:17:57.940442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.188 { 00:17:10.188 "params": { 00:17:10.188 "name": "Nvme$subsystem", 00:17:10.188 "trtype": "$TEST_TRANSPORT", 00:17:10.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.188 "adrfam": "ipv4", 00:17:10.188 "trsvcid": "$NVMF_PORT", 00:17:10.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.188 "hdgst": ${hdgst:-false}, 00:17:10.188 "ddgst": ${ddgst:-false} 00:17:10.188 }, 00:17:10.188 "method": "bdev_nvme_attach_controller" 00:17:10.188 } 00:17:10.188 EOF 00:17:10.188 )") 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
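bdevio is not driven over RPC; it receives its whole bdev-layer configuration as an SPDK JSON config, passed here on an inherited file descriptor (--json /dev/fd/62) that gen_nvmf_target_json writes into. A minimal stand-alone equivalent with the same controller parameters (the generated JSON is printed just below in the trace; the real helper may add further defaults) could look like this sketch, with /tmp/bdevio_nvme.json as a placeholder path:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024

The harness itself never writes a file; it streams the generated config through a file descriptor instead, which is what the /dev/fd/62 argument above corresponds to.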
00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:10.188 04:17:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.188 "params": { 00:17:10.188 "name": "Nvme1", 00:17:10.188 "trtype": "tcp", 00:17:10.188 "traddr": "10.0.0.2", 00:17:10.188 "adrfam": "ipv4", 00:17:10.188 "trsvcid": "4420", 00:17:10.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.188 "hdgst": false, 00:17:10.188 "ddgst": false 00:17:10.188 }, 00:17:10.188 "method": "bdev_nvme_attach_controller" 00:17:10.188 }' 00:17:10.188 [2024-05-15 04:17:57.985229] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:10.188 [2024-05-15 04:17:57.985322] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3393870 ] 00:17:10.188 [2024-05-15 04:17:58.058164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.188 [2024-05-15 04:17:58.174512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.188 [2024-05-15 04:17:58.174559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.188 [2024-05-15 04:17:58.174562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.447 I/O targets: 00:17:10.447 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:10.447 00:17:10.447 00:17:10.447 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.447 http://cunit.sourceforge.net/ 00:17:10.447 00:17:10.447 00:17:10.447 Suite: bdevio tests on: Nvme1n1 00:17:10.705 Test: blockdev write read block ...passed 00:17:10.705 Test: blockdev write zeroes read block ...passed 00:17:10.705 Test: blockdev write zeroes read no split ...passed 00:17:10.705 Test: blockdev write zeroes read split ...passed 00:17:10.705 Test: blockdev write zeroes read split partial ...passed 00:17:10.705 Test: blockdev reset ...[2024-05-15 04:17:58.678458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.705 [2024-05-15 04:17:58.678567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0d340 (9): Bad file descriptor 00:17:10.963 [2024-05-15 04:17:58.735294] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:10.963 passed 00:17:10.963 Test: blockdev write read 8 blocks ...passed 00:17:10.963 Test: blockdev write read size > 128k ...passed 00:17:10.963 Test: blockdev write read invalid size ...passed 00:17:10.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.963 Test: blockdev write read max offset ...passed 00:17:10.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.963 Test: blockdev writev readv 8 blocks ...passed 00:17:10.963 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.963 Test: blockdev writev readv block ...passed 00:17:10.963 Test: blockdev writev readv size > 128k ...passed 00:17:10.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.963 Test: blockdev comparev and writev ...[2024-05-15 04:17:58.912019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.912058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.912083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.912102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.912530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.912556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.912577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.912594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.913039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.913065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.913087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.913104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.913520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.913550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:10.963 [2024-05-15 04:17:58.913573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.963 [2024-05-15 04:17:58.913589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:10.963 passed 00:17:11.222 Test: blockdev nvme passthru rw ...passed 00:17:11.222 Test: blockdev nvme passthru vendor specific ...[2024-05-15 04:17:58.995398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.222 [2024-05-15 04:17:58.995426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:11.222 [2024-05-15 04:17:58.995649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.222 [2024-05-15 04:17:58.995672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:11.222 [2024-05-15 04:17:58.995888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.222 [2024-05-15 04:17:58.995911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:11.222 [2024-05-15 04:17:58.996135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.222 [2024-05-15 04:17:58.996159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:11.222 passed 00:17:11.222 Test: blockdev nvme admin passthru ...passed 00:17:11.222 Test: blockdev copy ...passed 00:17:11.222 00:17:11.222 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.222 suites 1 1 n/a 0 0 00:17:11.222 tests 23 23 23 0 0 00:17:11.222 asserts 152 152 152 0 n/a 00:17:11.222 00:17:11.222 Elapsed time = 1.185 seconds 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.481 rmmod nvme_tcp 00:17:11.481 rmmod nvme_fabrics 00:17:11.481 rmmod nvme_keyring 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3393712 ']' 00:17:11.481 04:17:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3393712 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3393712 ']' 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3393712 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3393712 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3393712' 00:17:11.481 killing process with pid 3393712 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3393712 00:17:11.481 [2024-05-15 04:17:59.496233] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:11.481 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3393712 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.048 04:17:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.583 04:18:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.583 00:17:14.583 real 0m7.861s 00:17:14.583 user 0m14.287s 00:17:14.583 sys 0m2.993s 00:17:14.583 04:18:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:14.583 04:18:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.583 ************************************ 00:17:14.583 END TEST nvmf_bdevio_no_huge 00:17:14.583 ************************************ 00:17:14.583 04:18:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:14.583 04:18:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:14.583 04:18:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:14.583 04:18:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.583 ************************************ 00:17:14.583 START TEST nvmf_tls 00:17:14.583 ************************************ 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
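Before nvmf_tls begins, the cleanup trace above has already unwound everything the bdevio suite set up; nvmftestfini reduces, roughly, to the following sketch (_remove_spdk_ns is the helper that deletes the spdk-created network namespaces, noted here only in a comment):

# stop the target and wait for it to exit
kill "$nvmfpid" && wait "$nvmfpid"

# unload the kernel initiator stack; removing nvme-tcp also drops the
# nvme_fabrics and nvme_keyring dependencies, as the rmmod lines above show
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# tear down the target-side namespace and flush the initiator address
_remove_spdk_ns    # effectively removes cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1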
00:17:14.583 * Looking for test storage... 00:17:14.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.583 04:18:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.584 04:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.484 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.484 04:18:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.742 
04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.742 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.743 
04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:17:16.743 00:17:16.743 --- 10.0.0.2 ping statistics --- 00:17:16.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.743 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:16.743 00:17:16.743 --- 10.0.0.1 ping statistics --- 00:17:16.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.743 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3396468 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3396468 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3396468 ']' 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.743 04:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.743 [2024-05-15 04:18:04.736898] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:17:16.743 [2024-05-15 04:18:04.737014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.001 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.001 [2024-05-15 04:18:04.821585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.001 [2024-05-15 04:18:04.937312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.001 [2024-05-15 04:18:04.937376] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
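Before the target application starts, nvmf_tcp_init splits the two detected E810 ports between a network namespace (target side, 10.0.0.2) and the root namespace (initiator side, 10.0.0.1), opens the NVMe/TCP port in the firewall, and checks reachability both ways. Condensed from the commands traced above into a stand-alone sketch; the interface names cvl_0_0/cvl_0_1 and the namespace name are the ones from this run:

  # Hedged sketch of the namespace plumbing traced above.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root netns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside netns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP toward the initiator NIC
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -m 0x2 --wait-for-rpc), which is the nvmfpid=3396468 process being waited on in the trace that follows.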
00:17:17.001 [2024-05-15 04:18:04.937392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.001 [2024-05-15 04:18:04.937405] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.001 [2024-05-15 04:18:04.937424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.001 [2024-05-15 04:18:04.937455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:17.937 true 00:17:17.937 04:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.200 04:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:18.200 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:18.200 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:18.200 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:18.463 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.463 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:18.720 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:18.720 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:18.720 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:18.978 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.978 04:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:19.235 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:19.235 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:19.235 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:19.235 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:19.493 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:19.493 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:19.493 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:19.752 04:18:07 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:19.752 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:20.010 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:20.010 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:20.010 04:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:20.268 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.268 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:20.526 04:18:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rpE9gBMRwv 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.PpeQ7KWugV 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rpE9gBMRwv 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PpeQ7KWugV 00:17:20.783 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:21.042 04:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:21.300 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rpE9gBMRwv 00:17:21.300 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rpE9gBMRwv 00:17:21.300 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:21.559 [2024-05-15 04:18:09.379056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.559 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:21.817 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:22.076 [2024-05-15 04:18:09.868329] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:22.076 [2024-05-15 04:18:09.868457] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:22.076 [2024-05-15 04:18:09.868677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.076 04:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:22.334 malloc0 00:17:22.334 04:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:22.592 04:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rpE9gBMRwv 00:17:22.850 [2024-05-15 04:18:10.638941] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:22.850 04:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rpE9gBMRwv 00:17:22.850 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.818 Initializing NVMe Controllers 00:17:32.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:32.818 Initialization complete. Launching workers. 
00:17:32.818 ======================================================== 00:17:32.818 Latency(us) 00:17:32.818 Device Information : IOPS MiB/s Average min max 00:17:32.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7693.20 30.05 8321.43 1171.94 9984.91 00:17:32.818 ======================================================== 00:17:32.818 Total : 7693.20 30.05 8321.43 1171.94 9984.91 00:17:32.818 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rpE9gBMRwv 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rpE9gBMRwv' 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3398871 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3398871 /var/tmp/bdevperf.sock 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3398871 ']' 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.818 04:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.819 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.819 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.819 04:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.819 [2024-05-15 04:18:20.798131] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
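Stripping the xtrace noise, the target-side TLS setup that produced the perf result above reduces to a short RPC sequence: the earlier sock_impl_get_options/jq round-trips only verify the option plumbing (tls_version, ktls toggle), after which the ssl socket implementation is pinned to TLS version 13, the TCP transport is created, and subsystem, listener, namespace and one allowed host keyed by the interchange-format PSK are registered. A condensed, hedged sketch; rpc.py and spdk_nvme_perf abbreviate the full paths from the trace, the key file is the mktemp name from this run, and the redirection into the key file is implied rather than visible in the xtrace:

  key_path=/tmp/tmp.rpE9gBMRwv
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # First data-path check: perf over TLS, pointing at the same key (run in the target netns, as traced).
  ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"

The summary above (about 7.7k IOPS, 30 MiB/s, ~8.3 ms average latency at queue depth 64) is the evidence that both the TLS handshake and the data path worked with this key.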
00:17:32.819 [2024-05-15 04:18:20.798230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398871 ] 00:17:32.819 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.077 [2024-05-15 04:18:20.868018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.077 [2024-05-15 04:18:20.971810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.077 04:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.077 04:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:33.077 04:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rpE9gBMRwv 00:17:33.643 [2024-05-15 04:18:21.355703] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.643 [2024-05-15 04:18:21.355817] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:33.643 TLSTESTn1 00:17:33.643 04:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:33.643 Running I/O for 10 seconds... 00:17:45.838 00:17:45.838 Latency(us) 00:17:45.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.838 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:45.838 Verification LBA range: start 0x0 length 0x2000 00:17:45.838 TLSTESTn1 : 10.07 1268.64 4.96 0.00 0.00 100558.04 8883.77 141363.58 00:17:45.838 =================================================================================================================== 00:17:45.838 Total : 1268.64 4.96 0.00 0.00 100558.04 8883.77 141363.58 00:17:45.838 0 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3398871 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3398871 ']' 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3398871 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3398871 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:45.838 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3398871' 00:17:45.839 killing process with pid 3398871 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3398871 00:17:45.839 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.839 00:17:45.839 Latency(us) 00:17:45.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:45.839 =================================================================================================================== 00:17:45.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.839 [2024-05-15 04:18:31.707079] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3398871 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PpeQ7KWugV 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PpeQ7KWugV 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PpeQ7KWugV 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PpeQ7KWugV' 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3400188 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3400188 /var/tmp/bdevperf.sock 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3400188 ']' 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.839 04:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.839 [2024-05-15 04:18:32.026757] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
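The successful TLSTESTn1 run above and the deliberately failing attempts that follow all drive the same initiator-side sequence; only the (subsystem NQN, host NQN, PSK file) triple changes between runs. Condensed from the traces, with binary paths abbreviated and the socket names from this run; backgrounding is shown here for illustration, while the test itself relies on -z plus waitforlisten before issuing RPCs:

  # bdevperf waits for RPC (-z), then a TLS-protected controller is attached and the workload started.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rpE9gBMRwv
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the registered key the attach succeeds and TLSTESTn1 sustains roughly 1269 IOPS of 4 KiB verify I/O at queue depth 128 for the 10-second run. The run starting here instead points at the second key, /tmp/tmp.PpeQ7KWugV, which was never registered with the subsystem, so the same attach is expected to fail.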
00:17:45.839 [2024-05-15 04:18:32.026864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400188 ] 00:17:45.839 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.839 [2024-05-15 04:18:32.107914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.839 [2024-05-15 04:18:32.213132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.839 04:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:45.839 04:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:45.839 04:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PpeQ7KWugV 00:17:45.839 [2024-05-15 04:18:33.201416] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.839 [2024-05-15 04:18:33.201541] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.839 [2024-05-15 04:18:33.210067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.839 [2024-05-15 04:18:33.210500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d3130 (107): Transport endpoint is not connected 00:17:45.839 [2024-05-15 04:18:33.211489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d3130 (9): Bad file descriptor 00:17:45.839 [2024-05-15 04:18:33.212489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.839 [2024-05-15 04:18:33.212508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.839 [2024-05-15 04:18:33.212540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:45.839 request: 00:17:45.839 { 00:17:45.839 "name": "TLSTEST", 00:17:45.839 "trtype": "tcp", 00:17:45.839 "traddr": "10.0.0.2", 00:17:45.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.839 "adrfam": "ipv4", 00:17:45.839 "trsvcid": "4420", 00:17:45.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.839 "psk": "/tmp/tmp.PpeQ7KWugV", 00:17:45.839 "method": "bdev_nvme_attach_controller", 00:17:45.839 "req_id": 1 00:17:45.839 } 00:17:45.839 Got JSON-RPC error response 00:17:45.839 response: 00:17:45.839 { 00:17:45.839 "code": -32602, 00:17:45.839 "message": "Invalid parameters" 00:17:45.839 } 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3400188 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3400188 ']' 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3400188 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400188 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400188' 00:17:45.839 killing process with pid 3400188 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3400188 00:17:45.839 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.839 00:17:45.839 Latency(us) 00:17:45.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.839 =================================================================================================================== 00:17:45.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.839 [2024-05-15 04:18:33.260521] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3400188 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rpE9gBMRwv 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rpE9gBMRwv 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rpE9gBMRwv 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rpE9gBMRwv' 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3400338 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3400338 /var/tmp/bdevperf.sock 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3400338 ']' 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.839 04:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.839 [2024-05-15 04:18:33.565352] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:45.839 [2024-05-15 04:18:33.565445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400338 ] 00:17:45.839 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.839 [2024-05-15 04:18:33.635645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.839 [2024-05-15 04:18:33.743919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.771 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.771 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:46.771 04:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rpE9gBMRwv 00:17:46.771 [2024-05-15 04:18:34.764670] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.771 [2024-05-15 04:18:34.764790] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:46.771 [2024-05-15 04:18:34.774024] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:46.771 [2024-05-15 04:18:34.774072] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:46.771 [2024-05-15 04:18:34.774114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:46.771 [2024-05-15 04:18:34.774757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83130 (107): Transport endpoint is not connected 00:17:46.771 [2024-05-15 04:18:34.775742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83130 (9): Bad file descriptor 00:17:46.771 [2024-05-15 04:18:34.776743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:46.771 [2024-05-15 04:18:34.776763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:46.771 [2024-05-15 04:18:34.776797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:46.771 request: 00:17:46.771 { 00:17:46.771 "name": "TLSTEST", 00:17:46.771 "trtype": "tcp", 00:17:46.771 "traddr": "10.0.0.2", 00:17:46.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:46.771 "adrfam": "ipv4", 00:17:46.771 "trsvcid": "4420", 00:17:46.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.771 "psk": "/tmp/tmp.rpE9gBMRwv", 00:17:46.771 "method": "bdev_nvme_attach_controller", 00:17:46.771 "req_id": 1 00:17:46.771 } 00:17:46.771 Got JSON-RPC error response 00:17:46.771 response: 00:17:46.771 { 00:17:46.771 "code": -32602, 00:17:46.771 "message": "Invalid parameters" 00:17:46.771 } 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3400338 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3400338 ']' 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3400338 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400338 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400338' 00:17:47.030 killing process with pid 3400338 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3400338 00:17:47.030 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.030 00:17:47.030 Latency(us) 00:17:47.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.030 =================================================================================================================== 00:17:47.030 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.030 [2024-05-15 04:18:34.831302] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.030 04:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3400338 00:17:47.288 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:47.288 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:47.288 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.288 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.288 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rpE9gBMRwv 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rpE9gBMRwv 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rpE9gBMRwv 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rpE9gBMRwv' 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3400486 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3400486 /var/tmp/bdevperf.sock 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3400486 ']' 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.289 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.289 [2024-05-15 04:18:35.135905] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:47.289 [2024-05-15 04:18:35.135995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400486 ] 00:17:47.289 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.289 [2024-05-15 04:18:35.203001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.547 [2024-05-15 04:18:35.313191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.547 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.547 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:47.547 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rpE9gBMRwv 00:17:47.820 [2024-05-15 04:18:35.650547] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.820 [2024-05-15 04:18:35.650673] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.820 [2024-05-15 04:18:35.656397] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:47.820 [2024-05-15 04:18:35.656429] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:47.820 [2024-05-15 04:18:35.656483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.821 [2024-05-15 04:18:35.656654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a5130 (107): Transport endpoint is not connected 00:17:47.821 [2024-05-15 04:18:35.657642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a5130 (9): Bad file descriptor 00:17:47.821 [2024-05-15 04:18:35.658641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:47.821 [2024-05-15 04:18:35.658663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.821 [2024-05-15 04:18:35.658696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:47.821 request: 00:17:47.821 { 00:17:47.821 "name": "TLSTEST", 00:17:47.821 "trtype": "tcp", 00:17:47.821 "traddr": "10.0.0.2", 00:17:47.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.821 "adrfam": "ipv4", 00:17:47.821 "trsvcid": "4420", 00:17:47.821 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:47.821 "psk": "/tmp/tmp.rpE9gBMRwv", 00:17:47.821 "method": "bdev_nvme_attach_controller", 00:17:47.821 "req_id": 1 00:17:47.821 } 00:17:47.821 Got JSON-RPC error response 00:17:47.821 response: 00:17:47.821 { 00:17:47.821 "code": -32602, 00:17:47.821 "message": "Invalid parameters" 00:17:47.821 } 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3400486 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3400486 ']' 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3400486 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400486 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400486' 00:17:47.821 killing process with pid 3400486 00:17:47.821 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3400486 00:17:47.821 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.821 00:17:47.821 Latency(us) 00:17:47.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.822 =================================================================================================================== 00:17:47.822 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.822 [2024-05-15 04:18:35.712187] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.822 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3400486 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3400626 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3400626 /var/tmp/bdevperf.sock 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3400626 ']' 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:48.152 04:18:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 [2024-05-15 04:18:36.018438] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:48.152 [2024-05-15 04:18:36.018518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400626 ] 00:17:48.152 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.152 [2024-05-15 04:18:36.084869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.410 [2024-05-15 04:18:36.190604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.410 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:48.410 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:48.410 04:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:48.668 [2024-05-15 04:18:36.558714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:48.668 [2024-05-15 04:18:36.559958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502ab0 (9): Bad file descriptor 00:17:48.668 [2024-05-15 04:18:36.560954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:48.668 [2024-05-15 04:18:36.560990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:48.668 [2024-05-15 04:18:36.561008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:48.668 request: 00:17:48.668 { 00:17:48.668 "name": "TLSTEST", 00:17:48.668 "trtype": "tcp", 00:17:48.668 "traddr": "10.0.0.2", 00:17:48.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.668 "adrfam": "ipv4", 00:17:48.668 "trsvcid": "4420", 00:17:48.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.668 "method": "bdev_nvme_attach_controller", 00:17:48.668 "req_id": 1 00:17:48.668 } 00:17:48.668 Got JSON-RPC error response 00:17:48.668 response: 00:17:48.668 { 00:17:48.668 "code": -32602, 00:17:48.668 "message": "Invalid parameters" 00:17:48.668 } 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3400626 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3400626 ']' 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3400626 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400626 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400626' 00:17:48.668 killing process with pid 3400626 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3400626 00:17:48.668 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.668 00:17:48.668 Latency(us) 00:17:48.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.668 =================================================================================================================== 00:17:48.668 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.668 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3400626 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3396468 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3396468 ']' 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3396468 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3396468 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3396468' 00:17:48.926 killing process with pid 3396468 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3396468 
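The request/response pairs printed above are SPDK JSON-RPC exchanges over the bdevperf Unix socket (/var/tmp/bdevperf.sock); rpc.py is only a thin wrapper around them. Below is a minimal, hypothetical Python client sketch: the JSON-RPC 2.0 envelope and the params mirror exactly what the log prints for the failing bdev_nvme_attach_controller call, but the read-until-it-parses framing is a simplification for illustration and is not the real rpc.py client.

import json
import socket


def spdk_rpc(sock_path, method, params, req_id=1):
    # Build a standard JSON-RPC 2.0 request around the method/params shown in the log.
    request = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    decoder = json.JSONDecoder()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode("utf-8"))
        buf = ""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk.decode("utf-8")
            try:
                # Naive framing (an assumption): keep reading until one complete
                # JSON object can be parsed, then return it.
                response, _ = decoder.raw_decode(buf)
                return response
            except ValueError:
                continue


# Mirrors the attach attempt logged above, which fails with -32602 "Invalid parameters"
# because no usable PSK is supplied for the TLS-only listener:
# spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
#     "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#     "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
#     "hostnqn": "nqn.2016-06.io.spdk:host1"})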
00:17:48.926 [2024-05-15 04:18:36.893677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:48.926 [2024-05-15 04:18:36.893757] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.926 04:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3396468 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:49.185 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.5Q5vkkqPe8 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.5Q5vkkqPe8 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3400780 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3400780 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3400780 ']' 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.444 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.444 [2024-05-15 04:18:37.286063] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:49.444 [2024-05-15 04:18:37.286140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.444 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.444 [2024-05-15 04:18:37.364946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.703 [2024-05-15 04:18:37.482565] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.703 [2024-05-15 04:18:37.482617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.703 [2024-05-15 04:18:37.482633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.703 [2024-05-15 04:18:37.482646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.703 [2024-05-15 04:18:37.482667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.703 [2024-05-15 04:18:37.482699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5Q5vkkqPe8 00:17:49.703 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.960 [2024-05-15 04:18:37.845728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.960 04:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.219 04:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.477 [2024-05-15 04:18:38.318948] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:50.477 [2024-05-15 04:18:38.319035] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.477 [2024-05-15 04:18:38.319259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.477 04:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.735 malloc0 00:17:50.735 04:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
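The key_long value created above (NVMeTLSkey-1:02:MDAx...wWXNJw==:) comes from format_interchange_psk applied to the raw hex string 00112233445566778899aabbccddeeff0011223344556677 with digest id 2. A minimal Python sketch of that interchange encoding follows; it assumes the PSK bytes are the 48 ASCII characters of the configured hex string (consistent with digest id 2 / SHA-384) and that the trailer is a CRC-32 of those bytes, with little-endian byte order assumed. This is a reconstruction of the shape of the logged value, not the test helper's code verbatim.

import base64
import zlib


def format_interchange_psk(key_hex_string, digest_id, prefix="NVMeTLSkey-1"):
    # Assumption: PSK bytes = ASCII of the configured hex string (48 bytes here).
    key = key_hex_string.encode("ascii")
    # Assumption: 4-byte CRC-32 trailer, little-endian byte order.
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    # digest id is printed as two hex digits, e.g. "02" for the 48-byte key above.
    return "{}:{:02x}:{}:".format(prefix, digest_id, b64)


print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

Under these assumptions the output has the same NVMeTLSkey-1:02:...==: shape and length as the logged key_long (48 key bytes + 4 CRC bytes -> a 72-character base64 payload ending in "=="); the exact trailer bytes depend on the CRC byte order the helper actually uses.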
00:17:50.993 04:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:17:51.252 [2024-05-15 04:18:39.048448] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Q5vkkqPe8 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5Q5vkkqPe8' 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3401063 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3401063 /var/tmp/bdevperf.sock 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3401063 ']' 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.252 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.252 [2024-05-15 04:18:39.104207] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:17:51.252 [2024-05-15 04:18:39.104291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401063 ] 00:17:51.252 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.252 [2024-05-15 04:18:39.173636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.511 [2024-05-15 04:18:39.284560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.511 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.511 04:18:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:51.511 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:17:51.769 [2024-05-15 04:18:39.618862] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.769 [2024-05-15 04:18:39.619008] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:51.769 TLSTESTn1 00:17:51.769 04:18:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.027 Running I/O for 10 seconds... 00:18:01.986 00:18:01.987 Latency(us) 00:18:01.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.987 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.987 Verification LBA range: start 0x0 length 0x2000 00:18:01.987 TLSTESTn1 : 10.09 1210.03 4.73 0.00 0.00 105427.96 5922.51 139033.41 00:18:01.987 =================================================================================================================== 00:18:01.987 Total : 1210.03 4.73 0.00 0.00 105427.96 5922.51 139033.41 00:18:01.987 0 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3401063 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3401063 ']' 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3401063 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401063 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401063' 00:18:01.987 killing process with pid 3401063 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3401063 00:18:01.987 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.987 00:18:01.987 Latency(us) 00:18:01.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:01.987 =================================================================================================================== 00:18:01.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.987 [2024-05-15 04:18:49.974959] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:01.987 04:18:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3401063 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.5Q5vkkqPe8 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Q5vkkqPe8 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Q5vkkqPe8 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Q5vkkqPe8 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5Q5vkkqPe8' 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3402374 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3402374 /var/tmp/bdevperf.sock 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3402374 ']' 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:02.245 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.503 [2024-05-15 04:18:50.289292] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:18:02.504 [2024-05-15 04:18:50.289390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402374 ] 00:18:02.504 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.504 [2024-05-15 04:18:50.359837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.504 [2024-05-15 04:18:50.465893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.762 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.762 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:02.762 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:18:03.021 [2024-05-15 04:18:50.804732] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.021 [2024-05-15 04:18:50.804819] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:03.021 [2024-05-15 04:18:50.804834] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.5Q5vkkqPe8 00:18:03.021 request: 00:18:03.021 { 00:18:03.021 "name": "TLSTEST", 00:18:03.021 "trtype": "tcp", 00:18:03.021 "traddr": "10.0.0.2", 00:18:03.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.021 "adrfam": "ipv4", 00:18:03.021 "trsvcid": "4420", 00:18:03.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.021 "psk": "/tmp/tmp.5Q5vkkqPe8", 00:18:03.021 "method": "bdev_nvme_attach_controller", 00:18:03.021 "req_id": 1 00:18:03.021 } 00:18:03.021 Got JSON-RPC error response 00:18:03.021 response: 00:18:03.021 { 00:18:03.021 "code": -1, 00:18:03.021 "message": "Operation not permitted" 00:18:03.021 } 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3402374 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3402374 ']' 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3402374 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402374 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402374' 00:18:03.021 killing process with pid 3402374 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3402374 00:18:03.021 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.021 00:18:03.021 Latency(us) 00:18:03.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.021 =================================================================================================================== 00:18:03.021 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.021 04:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3402374 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3400780 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3400780 ']' 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3400780 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400780 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400780' 00:18:03.280 killing process with pid 3400780 00:18:03.280 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3400780 00:18:03.280 [2024-05-15 04:18:51.144533] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:03.280 [2024-05-15 04:18:51.144588] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:03.281 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3400780 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3402522 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3402522 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3402522 ']' 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
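The chmod 0666 step above deliberately breaks the key file: the subsequent bdev_nvme_attach_controller fails in bdev_nvme_load_psk with "Incorrect permissions for PSK file" and the RPC returns -1 / "Operation not permitted", whereas the same file at mode 0600 was accepted in the earlier run. A rough sketch of that kind of permission policy is given below; the function name and the exact mode mask are assumptions for illustration, not SPDK's actual check.

import os
import stat


def psk_file_permissions_ok(path):
    # Assumed policy matching the observed behaviour: 0600 passes, 0666 fails.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    # Reject the key file if any group or other permission bits are set.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0


# Example with the temp key path from the log (hypothetical usage):
# psk_file_permissions_ok("/tmp/tmp.5Q5vkkqPe8")  # True at 0600, False at 0666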
00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:03.539 04:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 [2024-05-15 04:18:51.489222] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:03.539 [2024-05-15 04:18:51.489316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.539 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.797 [2024-05-15 04:18:51.570277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.797 [2024-05-15 04:18:51.683088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.797 [2024-05-15 04:18:51.683153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.797 [2024-05-15 04:18:51.683169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.797 [2024-05-15 04:18:51.683183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.797 [2024-05-15 04:18:51.683195] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.797 [2024-05-15 04:18:51.683238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5Q5vkkqPe8 00:18:04.731 04:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.990 [2024-05-15 04:18:52.764129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.990 04:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.248 04:18:53 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.506 [2024-05-15 04:18:53.289482] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:05.506 [2024-05-15 04:18:53.289593] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.506 [2024-05-15 04:18:53.289827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.506 04:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.764 malloc0 00:18:05.764 04:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.021 04:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:18:06.279 [2024-05-15 04:18:54.103438] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:06.279 [2024-05-15 04:18:54.103479] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:06.279 [2024-05-15 04:18:54.103517] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:06.279 request: 00:18:06.279 { 00:18:06.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.279 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.279 "psk": "/tmp/tmp.5Q5vkkqPe8", 00:18:06.279 "method": "nvmf_subsystem_add_host", 00:18:06.279 "req_id": 1 00:18:06.279 } 00:18:06.279 Got JSON-RPC error response 00:18:06.279 response: 00:18:06.279 { 00:18:06.279 "code": -32603, 00:18:06.279 "message": "Internal error" 00:18:06.279 } 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3402522 00:18:06.279 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3402522 ']' 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3402522 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402522 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402522' 00:18:06.280 killing process with pid 3402522 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3402522 00:18:06.280 [2024-05-15 04:18:54.156297] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:06.280 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3402522 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.5Q5vkkqPe8 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3402831 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3402831 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3402831 ']' 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:06.538 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.538 [2024-05-15 04:18:54.476589] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:06.538 [2024-05-15 04:18:54.476665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.538 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.797 [2024-05-15 04:18:54.557903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.797 [2024-05-15 04:18:54.675823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.797 [2024-05-15 04:18:54.675885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.797 [2024-05-15 04:18:54.675902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.797 [2024-05-15 04:18:54.675915] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.797 [2024-05-15 04:18:54.675927] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.797 [2024-05-15 04:18:54.675979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5Q5vkkqPe8 00:18:06.797 04:18:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.056 [2024-05-15 04:18:55.029835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.056 04:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.314 04:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.572 [2024-05-15 04:18:55.551148] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:07.572 [2024-05-15 04:18:55.551241] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.572 [2024-05-15 04:18:55.551465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.572 04:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:07.831 malloc0 00:18:08.090 04:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.349 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:18:08.615 [2024-05-15 04:18:56.369572] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3403114 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3403114 /var/tmp/bdevperf.sock 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3403114 ']' 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:08.615 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.615 [2024-05-15 04:18:56.432573] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:08.615 [2024-05-15 04:18:56.432644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403114 ] 00:18:08.615 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.615 [2024-05-15 04:18:56.500289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.615 [2024-05-15 04:18:56.607610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.902 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:08.902 04:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:08.902 04:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:18:09.160 [2024-05-15 04:18:56.992467] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.160 [2024-05-15 04:18:56.992590] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.160 TLSTESTn1 00:18:09.160 04:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:09.419 04:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:09.419 "subsystems": [ 00:18:09.419 { 00:18:09.419 "subsystem": "keyring", 00:18:09.419 "config": [] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "iobuf", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "iobuf_set_options", 00:18:09.419 "params": { 00:18:09.419 "small_pool_count": 8192, 00:18:09.419 "large_pool_count": 1024, 00:18:09.419 "small_bufsize": 8192, 00:18:09.419 "large_bufsize": 135168 00:18:09.419 } 00:18:09.419 } 00:18:09.419 ] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "sock", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "sock_impl_set_options", 00:18:09.419 "params": { 00:18:09.419 "impl_name": "posix", 00:18:09.419 "recv_buf_size": 2097152, 00:18:09.419 "send_buf_size": 2097152, 00:18:09.419 "enable_recv_pipe": true, 00:18:09.419 "enable_quickack": false, 00:18:09.419 "enable_placement_id": 0, 00:18:09.419 "enable_zerocopy_send_server": true, 00:18:09.419 "enable_zerocopy_send_client": false, 00:18:09.419 "zerocopy_threshold": 0, 00:18:09.419 "tls_version": 0, 00:18:09.419 "enable_ktls": false 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "sock_impl_set_options", 00:18:09.419 "params": { 00:18:09.419 
"impl_name": "ssl", 00:18:09.419 "recv_buf_size": 4096, 00:18:09.419 "send_buf_size": 4096, 00:18:09.419 "enable_recv_pipe": true, 00:18:09.419 "enable_quickack": false, 00:18:09.419 "enable_placement_id": 0, 00:18:09.419 "enable_zerocopy_send_server": true, 00:18:09.419 "enable_zerocopy_send_client": false, 00:18:09.419 "zerocopy_threshold": 0, 00:18:09.419 "tls_version": 0, 00:18:09.419 "enable_ktls": false 00:18:09.419 } 00:18:09.419 } 00:18:09.419 ] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "vmd", 00:18:09.419 "config": [] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "accel", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "accel_set_options", 00:18:09.419 "params": { 00:18:09.419 "small_cache_size": 128, 00:18:09.419 "large_cache_size": 16, 00:18:09.419 "task_count": 2048, 00:18:09.419 "sequence_count": 2048, 00:18:09.419 "buf_count": 2048 00:18:09.419 } 00:18:09.419 } 00:18:09.419 ] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "bdev", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "bdev_set_options", 00:18:09.419 "params": { 00:18:09.419 "bdev_io_pool_size": 65535, 00:18:09.419 "bdev_io_cache_size": 256, 00:18:09.419 "bdev_auto_examine": true, 00:18:09.419 "iobuf_small_cache_size": 128, 00:18:09.419 "iobuf_large_cache_size": 16 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_raid_set_options", 00:18:09.419 "params": { 00:18:09.419 "process_window_size_kb": 1024 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_iscsi_set_options", 00:18:09.419 "params": { 00:18:09.419 "timeout_sec": 30 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_nvme_set_options", 00:18:09.419 "params": { 00:18:09.419 "action_on_timeout": "none", 00:18:09.419 "timeout_us": 0, 00:18:09.419 "timeout_admin_us": 0, 00:18:09.419 "keep_alive_timeout_ms": 10000, 00:18:09.419 "arbitration_burst": 0, 00:18:09.419 "low_priority_weight": 0, 00:18:09.419 "medium_priority_weight": 0, 00:18:09.419 "high_priority_weight": 0, 00:18:09.419 "nvme_adminq_poll_period_us": 10000, 00:18:09.419 "nvme_ioq_poll_period_us": 0, 00:18:09.419 "io_queue_requests": 0, 00:18:09.419 "delay_cmd_submit": true, 00:18:09.419 "transport_retry_count": 4, 00:18:09.419 "bdev_retry_count": 3, 00:18:09.419 "transport_ack_timeout": 0, 00:18:09.419 "ctrlr_loss_timeout_sec": 0, 00:18:09.419 "reconnect_delay_sec": 0, 00:18:09.419 "fast_io_fail_timeout_sec": 0, 00:18:09.419 "disable_auto_failback": false, 00:18:09.419 "generate_uuids": false, 00:18:09.419 "transport_tos": 0, 00:18:09.419 "nvme_error_stat": false, 00:18:09.419 "rdma_srq_size": 0, 00:18:09.419 "io_path_stat": false, 00:18:09.419 "allow_accel_sequence": false, 00:18:09.419 "rdma_max_cq_size": 0, 00:18:09.419 "rdma_cm_event_timeout_ms": 0, 00:18:09.419 "dhchap_digests": [ 00:18:09.419 "sha256", 00:18:09.419 "sha384", 00:18:09.419 "sha512" 00:18:09.419 ], 00:18:09.419 "dhchap_dhgroups": [ 00:18:09.419 "null", 00:18:09.419 "ffdhe2048", 00:18:09.419 "ffdhe3072", 00:18:09.419 "ffdhe4096", 00:18:09.419 "ffdhe6144", 00:18:09.419 "ffdhe8192" 00:18:09.419 ] 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_nvme_set_hotplug", 00:18:09.419 "params": { 00:18:09.419 "period_us": 100000, 00:18:09.419 "enable": false 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_malloc_create", 00:18:09.419 "params": { 00:18:09.419 "name": "malloc0", 00:18:09.419 "num_blocks": 8192, 00:18:09.419 "block_size": 4096, 00:18:09.419 
"physical_block_size": 4096, 00:18:09.419 "uuid": "cb3feea9-7996-429f-8199-7cb2b1776d8b", 00:18:09.419 "optimal_io_boundary": 0 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "bdev_wait_for_examine" 00:18:09.419 } 00:18:09.419 ] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "nbd", 00:18:09.419 "config": [] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "scheduler", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "framework_set_scheduler", 00:18:09.419 "params": { 00:18:09.419 "name": "static" 00:18:09.419 } 00:18:09.419 } 00:18:09.419 ] 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "subsystem": "nvmf", 00:18:09.419 "config": [ 00:18:09.419 { 00:18:09.419 "method": "nvmf_set_config", 00:18:09.419 "params": { 00:18:09.419 "discovery_filter": "match_any", 00:18:09.419 "admin_cmd_passthru": { 00:18:09.419 "identify_ctrlr": false 00:18:09.419 } 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "nvmf_set_max_subsystems", 00:18:09.419 "params": { 00:18:09.419 "max_subsystems": 1024 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "nvmf_set_crdt", 00:18:09.419 "params": { 00:18:09.419 "crdt1": 0, 00:18:09.419 "crdt2": 0, 00:18:09.419 "crdt3": 0 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "nvmf_create_transport", 00:18:09.419 "params": { 00:18:09.419 "trtype": "TCP", 00:18:09.419 "max_queue_depth": 128, 00:18:09.419 "max_io_qpairs_per_ctrlr": 127, 00:18:09.419 "in_capsule_data_size": 4096, 00:18:09.419 "max_io_size": 131072, 00:18:09.419 "io_unit_size": 131072, 00:18:09.419 "max_aq_depth": 128, 00:18:09.419 "num_shared_buffers": 511, 00:18:09.419 "buf_cache_size": 4294967295, 00:18:09.419 "dif_insert_or_strip": false, 00:18:09.419 "zcopy": false, 00:18:09.419 "c2h_success": false, 00:18:09.419 "sock_priority": 0, 00:18:09.419 "abort_timeout_sec": 1, 00:18:09.419 "ack_timeout": 0, 00:18:09.419 "data_wr_pool_size": 0 00:18:09.419 } 00:18:09.419 }, 00:18:09.419 { 00:18:09.419 "method": "nvmf_create_subsystem", 00:18:09.419 "params": { 00:18:09.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.419 "allow_any_host": false, 00:18:09.420 "serial_number": "SPDK00000000000001", 00:18:09.420 "model_number": "SPDK bdev Controller", 00:18:09.420 "max_namespaces": 10, 00:18:09.420 "min_cntlid": 1, 00:18:09.420 "max_cntlid": 65519, 00:18:09.420 "ana_reporting": false 00:18:09.420 } 00:18:09.420 }, 00:18:09.420 { 00:18:09.420 "method": "nvmf_subsystem_add_host", 00:18:09.420 "params": { 00:18:09.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.420 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.420 "psk": "/tmp/tmp.5Q5vkkqPe8" 00:18:09.420 } 00:18:09.420 }, 00:18:09.420 { 00:18:09.420 "method": "nvmf_subsystem_add_ns", 00:18:09.420 "params": { 00:18:09.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.420 "namespace": { 00:18:09.420 "nsid": 1, 00:18:09.420 "bdev_name": "malloc0", 00:18:09.420 "nguid": "CB3FEEA97996429F81997CB2B1776D8B", 00:18:09.420 "uuid": "cb3feea9-7996-429f-8199-7cb2b1776d8b", 00:18:09.420 "no_auto_visible": false 00:18:09.420 } 00:18:09.420 } 00:18:09.420 }, 00:18:09.420 { 00:18:09.420 "method": "nvmf_subsystem_add_listener", 00:18:09.420 "params": { 00:18:09.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.420 "listen_address": { 00:18:09.420 "trtype": "TCP", 00:18:09.420 "adrfam": "IPv4", 00:18:09.420 "traddr": "10.0.0.2", 00:18:09.420 "trsvcid": "4420" 00:18:09.420 }, 00:18:09.420 "secure_channel": true 00:18:09.420 } 00:18:09.420 } 00:18:09.420 ] 00:18:09.420 } 
00:18:09.420 ] 00:18:09.420 }' 00:18:09.420 04:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:09.986 04:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:09.986 "subsystems": [ 00:18:09.986 { 00:18:09.986 "subsystem": "keyring", 00:18:09.986 "config": [] 00:18:09.986 }, 00:18:09.986 { 00:18:09.986 "subsystem": "iobuf", 00:18:09.986 "config": [ 00:18:09.986 { 00:18:09.986 "method": "iobuf_set_options", 00:18:09.986 "params": { 00:18:09.986 "small_pool_count": 8192, 00:18:09.986 "large_pool_count": 1024, 00:18:09.986 "small_bufsize": 8192, 00:18:09.986 "large_bufsize": 135168 00:18:09.986 } 00:18:09.986 } 00:18:09.986 ] 00:18:09.986 }, 00:18:09.986 { 00:18:09.986 "subsystem": "sock", 00:18:09.986 "config": [ 00:18:09.986 { 00:18:09.986 "method": "sock_impl_set_options", 00:18:09.986 "params": { 00:18:09.986 "impl_name": "posix", 00:18:09.986 "recv_buf_size": 2097152, 00:18:09.986 "send_buf_size": 2097152, 00:18:09.986 "enable_recv_pipe": true, 00:18:09.986 "enable_quickack": false, 00:18:09.986 "enable_placement_id": 0, 00:18:09.986 "enable_zerocopy_send_server": true, 00:18:09.986 "enable_zerocopy_send_client": false, 00:18:09.986 "zerocopy_threshold": 0, 00:18:09.986 "tls_version": 0, 00:18:09.986 "enable_ktls": false 00:18:09.986 } 00:18:09.986 }, 00:18:09.986 { 00:18:09.986 "method": "sock_impl_set_options", 00:18:09.986 "params": { 00:18:09.986 "impl_name": "ssl", 00:18:09.986 "recv_buf_size": 4096, 00:18:09.986 "send_buf_size": 4096, 00:18:09.986 "enable_recv_pipe": true, 00:18:09.986 "enable_quickack": false, 00:18:09.986 "enable_placement_id": 0, 00:18:09.986 "enable_zerocopy_send_server": true, 00:18:09.986 "enable_zerocopy_send_client": false, 00:18:09.986 "zerocopy_threshold": 0, 00:18:09.986 "tls_version": 0, 00:18:09.986 "enable_ktls": false 00:18:09.986 } 00:18:09.986 } 00:18:09.986 ] 00:18:09.986 }, 00:18:09.986 { 00:18:09.986 "subsystem": "vmd", 00:18:09.986 "config": [] 00:18:09.986 }, 00:18:09.986 { 00:18:09.986 "subsystem": "accel", 00:18:09.986 "config": [ 00:18:09.986 { 00:18:09.986 "method": "accel_set_options", 00:18:09.986 "params": { 00:18:09.987 "small_cache_size": 128, 00:18:09.987 "large_cache_size": 16, 00:18:09.987 "task_count": 2048, 00:18:09.987 "sequence_count": 2048, 00:18:09.987 "buf_count": 2048 00:18:09.987 } 00:18:09.987 } 00:18:09.987 ] 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "subsystem": "bdev", 00:18:09.987 "config": [ 00:18:09.987 { 00:18:09.987 "method": "bdev_set_options", 00:18:09.987 "params": { 00:18:09.987 "bdev_io_pool_size": 65535, 00:18:09.987 "bdev_io_cache_size": 256, 00:18:09.987 "bdev_auto_examine": true, 00:18:09.987 "iobuf_small_cache_size": 128, 00:18:09.987 "iobuf_large_cache_size": 16 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_raid_set_options", 00:18:09.987 "params": { 00:18:09.987 "process_window_size_kb": 1024 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_iscsi_set_options", 00:18:09.987 "params": { 00:18:09.987 "timeout_sec": 30 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_nvme_set_options", 00:18:09.987 "params": { 00:18:09.987 "action_on_timeout": "none", 00:18:09.987 "timeout_us": 0, 00:18:09.987 "timeout_admin_us": 0, 00:18:09.987 "keep_alive_timeout_ms": 10000, 00:18:09.987 "arbitration_burst": 0, 00:18:09.987 "low_priority_weight": 0, 00:18:09.987 "medium_priority_weight": 0, 00:18:09.987 
"high_priority_weight": 0, 00:18:09.987 "nvme_adminq_poll_period_us": 10000, 00:18:09.987 "nvme_ioq_poll_period_us": 0, 00:18:09.987 "io_queue_requests": 512, 00:18:09.987 "delay_cmd_submit": true, 00:18:09.987 "transport_retry_count": 4, 00:18:09.987 "bdev_retry_count": 3, 00:18:09.987 "transport_ack_timeout": 0, 00:18:09.987 "ctrlr_loss_timeout_sec": 0, 00:18:09.987 "reconnect_delay_sec": 0, 00:18:09.987 "fast_io_fail_timeout_sec": 0, 00:18:09.987 "disable_auto_failback": false, 00:18:09.987 "generate_uuids": false, 00:18:09.987 "transport_tos": 0, 00:18:09.987 "nvme_error_stat": false, 00:18:09.987 "rdma_srq_size": 0, 00:18:09.987 "io_path_stat": false, 00:18:09.987 "allow_accel_sequence": false, 00:18:09.987 "rdma_max_cq_size": 0, 00:18:09.987 "rdma_cm_event_timeout_ms": 0, 00:18:09.987 "dhchap_digests": [ 00:18:09.987 "sha256", 00:18:09.987 "sha384", 00:18:09.987 "sha512" 00:18:09.987 ], 00:18:09.987 "dhchap_dhgroups": [ 00:18:09.987 "null", 00:18:09.987 "ffdhe2048", 00:18:09.987 "ffdhe3072", 00:18:09.987 "ffdhe4096", 00:18:09.987 "ffdhe6144", 00:18:09.987 "ffdhe8192" 00:18:09.987 ] 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_nvme_attach_controller", 00:18:09.987 "params": { 00:18:09.987 "name": "TLSTEST", 00:18:09.987 "trtype": "TCP", 00:18:09.987 "adrfam": "IPv4", 00:18:09.987 "traddr": "10.0.0.2", 00:18:09.987 "trsvcid": "4420", 00:18:09.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.987 "prchk_reftag": false, 00:18:09.987 "prchk_guard": false, 00:18:09.987 "ctrlr_loss_timeout_sec": 0, 00:18:09.987 "reconnect_delay_sec": 0, 00:18:09.987 "fast_io_fail_timeout_sec": 0, 00:18:09.987 "psk": "/tmp/tmp.5Q5vkkqPe8", 00:18:09.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.987 "hdgst": false, 00:18:09.987 "ddgst": false 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_nvme_set_hotplug", 00:18:09.987 "params": { 00:18:09.987 "period_us": 100000, 00:18:09.987 "enable": false 00:18:09.987 } 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "method": "bdev_wait_for_examine" 00:18:09.987 } 00:18:09.987 ] 00:18:09.987 }, 00:18:09.987 { 00:18:09.987 "subsystem": "nbd", 00:18:09.987 "config": [] 00:18:09.987 } 00:18:09.987 ] 00:18:09.987 }' 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3403114 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3403114 ']' 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3403114 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3403114 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3403114' 00:18:09.987 killing process with pid 3403114 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3403114 00:18:09.987 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.987 00:18:09.987 Latency(us) 00:18:09.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.987 
=================================================================================================================== 00:18:09.987 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.987 [2024-05-15 04:18:57.786416] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.987 04:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3403114 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3402831 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3402831 ']' 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3402831 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402831 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402831' 00:18:10.246 killing process with pid 3402831 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3402831 00:18:10.246 [2024-05-15 04:18:58.060894] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:10.246 [2024-05-15 04:18:58.060953] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.246 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3402831 00:18:10.505 04:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:10.505 04:18:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.505 04:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:10.505 "subsystems": [ 00:18:10.505 { 00:18:10.505 "subsystem": "keyring", 00:18:10.505 "config": [] 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "subsystem": "iobuf", 00:18:10.505 "config": [ 00:18:10.505 { 00:18:10.505 "method": "iobuf_set_options", 00:18:10.505 "params": { 00:18:10.505 "small_pool_count": 8192, 00:18:10.505 "large_pool_count": 1024, 00:18:10.505 "small_bufsize": 8192, 00:18:10.505 "large_bufsize": 135168 00:18:10.505 } 00:18:10.505 } 00:18:10.505 ] 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "subsystem": "sock", 00:18:10.505 "config": [ 00:18:10.505 { 00:18:10.505 "method": "sock_impl_set_options", 00:18:10.505 "params": { 00:18:10.505 "impl_name": "posix", 00:18:10.505 "recv_buf_size": 2097152, 00:18:10.505 "send_buf_size": 2097152, 00:18:10.505 "enable_recv_pipe": true, 00:18:10.505 "enable_quickack": false, 00:18:10.505 "enable_placement_id": 0, 00:18:10.505 "enable_zerocopy_send_server": true, 00:18:10.505 "enable_zerocopy_send_client": false, 00:18:10.505 "zerocopy_threshold": 0, 00:18:10.505 "tls_version": 0, 00:18:10.505 "enable_ktls": false 00:18:10.505 } 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "method": "sock_impl_set_options", 00:18:10.505 "params": { 00:18:10.505 "impl_name": "ssl", 00:18:10.505 "recv_buf_size": 4096, 00:18:10.505 
"send_buf_size": 4096, 00:18:10.505 "enable_recv_pipe": true, 00:18:10.505 "enable_quickack": false, 00:18:10.505 "enable_placement_id": 0, 00:18:10.505 "enable_zerocopy_send_server": true, 00:18:10.505 "enable_zerocopy_send_client": false, 00:18:10.505 "zerocopy_threshold": 0, 00:18:10.505 "tls_version": 0, 00:18:10.505 "enable_ktls": false 00:18:10.505 } 00:18:10.505 } 00:18:10.505 ] 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "subsystem": "vmd", 00:18:10.505 "config": [] 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "subsystem": "accel", 00:18:10.505 "config": [ 00:18:10.505 { 00:18:10.505 "method": "accel_set_options", 00:18:10.505 "params": { 00:18:10.505 "small_cache_size": 128, 00:18:10.505 "large_cache_size": 16, 00:18:10.505 "task_count": 2048, 00:18:10.505 "sequence_count": 2048, 00:18:10.505 "buf_count": 2048 00:18:10.505 } 00:18:10.505 } 00:18:10.505 ] 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "subsystem": "bdev", 00:18:10.505 "config": [ 00:18:10.505 { 00:18:10.505 "method": "bdev_set_options", 00:18:10.505 "params": { 00:18:10.505 "bdev_io_pool_size": 65535, 00:18:10.505 "bdev_io_cache_size": 256, 00:18:10.505 "bdev_auto_examine": true, 00:18:10.505 "iobuf_small_cache_size": 128, 00:18:10.505 "iobuf_large_cache_size": 16 00:18:10.505 } 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "method": "bdev_raid_set_options", 00:18:10.505 "params": { 00:18:10.505 "process_window_size_kb": 1024 00:18:10.505 } 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "method": "bdev_iscsi_set_options", 00:18:10.505 "params": { 00:18:10.505 "timeout_sec": 30 00:18:10.505 } 00:18:10.505 }, 00:18:10.505 { 00:18:10.505 "method": "bdev_nvme_set_options", 00:18:10.505 "params": { 00:18:10.505 "action_on_timeout": "none", 00:18:10.505 "timeout_us": 0, 00:18:10.505 "timeout_admin_us": 0, 00:18:10.505 "keep_alive_timeout_ms": 10000, 00:18:10.505 "arbitration_burst": 0, 00:18:10.505 "low_priority_weight": 0, 00:18:10.505 "medium_priority_weight": 0, 00:18:10.505 "high_priority_weight": 0, 00:18:10.505 "nvme_adminq_poll_period_us": 10000, 00:18:10.505 "nvme_ioq_poll_period_us": 0, 00:18:10.505 "io_queue_requests": 0, 00:18:10.505 "delay_cmd_submit": true, 00:18:10.505 "transport_retry_count": 4, 00:18:10.505 "bdev_retry_count": 3, 00:18:10.505 "transport_ack_timeout": 0, 00:18:10.506 "ctrlr_loss_timeout_sec": 0, 00:18:10.506 "reconnect_delay_sec": 0, 00:18:10.506 "fast_io_fail_timeout_sec": 0, 00:18:10.506 "disable_auto_failback": false, 00:18:10.506 "generate_uuids": false, 00:18:10.506 "transport_tos": 0, 00:18:10.506 "nvme_error_stat": false, 00:18:10.506 "rdma_srq_size": 0, 00:18:10.506 "io_path_stat": false, 00:18:10.506 "allow_accel_sequence": false, 00:18:10.506 "rdma_max_cq_size": 0, 00:18:10.506 "rdma_cm_event_timeout_ms": 0, 00:18:10.506 "dhchap_digests": [ 00:18:10.506 "sha256", 00:18:10.506 "sha384", 00:18:10.506 "sha512" 00:18:10.506 ], 00:18:10.506 "dhchap_dhgroups": [ 00:18:10.506 "null", 00:18:10.506 "ffdhe2048", 00:18:10.506 "ffdhe3072", 00:18:10.506 "ffdhe4096", 00:18:10.506 "ffdhe6144", 00:18:10.506 "ffdhe8192" 00:18:10.506 ] 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "bdev_nvme_set_hotplug", 00:18:10.506 "params": { 00:18:10.506 "period_us": 100000, 00:18:10.506 "enable": false 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "bdev_malloc_create", 00:18:10.506 "params": { 00:18:10.506 "name": "malloc0", 00:18:10.506 "num_blocks": 8192, 00:18:10.506 "block_size": 4096, 00:18:10.506 "physical_block_size": 4096, 00:18:10.506 "uuid": 
"cb3feea9-7996-429f-8199-7cb2b1776d8b", 00:18:10.506 "optimal_io_boundary": 0 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "bdev_wait_for_examine" 00:18:10.506 } 00:18:10.506 ] 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "subsystem": "nbd", 00:18:10.506 "config": [] 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "subsystem": "scheduler", 00:18:10.506 "config": [ 00:18:10.506 { 00:18:10.506 "method": "framework_set_scheduler", 00:18:10.506 "params": { 00:18:10.506 "name": "static" 00:18:10.506 } 00:18:10.506 } 00:18:10.506 ] 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "subsystem": "nvmf", 00:18:10.506 "config": [ 00:18:10.506 { 00:18:10.506 "method": "nvmf_set_config", 00:18:10.506 "params": { 00:18:10.506 "discovery_filter": "match_any", 00:18:10.506 "admin_cmd_passthru": { 00:18:10.506 "identify_ctrlr": false 00:18:10.506 } 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_set_max_subsystems", 00:18:10.506 "params": { 00:18:10.506 "max_subsystems": 1024 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_set_crdt", 00:18:10.506 "params": { 00:18:10.506 "crdt1": 0, 00:18:10.506 "crdt2": 0, 00:18:10.506 "crdt3": 0 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_create_transport", 00:18:10.506 "params": { 00:18:10.506 "trtype": "TCP", 00:18:10.506 "max_queue_depth": 128, 00:18:10.506 "max_io_qpairs_per_ctrlr": 127, 00:18:10.506 "in_capsule_data_size": 4096, 00:18:10.506 "max_io_size": 131072, 00:18:10.506 "io_unit_size": 131072, 00:18:10.506 "max_aq_depth": 128, 00:18:10.506 "num_shared_buffers": 511, 00:18:10.506 "buf_cache_size": 4294967295, 00:18:10.506 "dif_insert_or_strip": false, 00:18:10.506 "zcopy": false, 00:18:10.506 "c2h_success": false, 00:18:10.506 "sock_priority": 0, 00:18:10.506 "abort_timeout_sec": 1, 00:18:10.506 "ack_timeout": 0, 00:18:10.506 "data_wr_pool_size": 0 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_create_subsystem", 00:18:10.506 "params": { 00:18:10.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.506 "allow_any_host": false, 00:18:10.506 "serial_number": "SPDK00000000000001", 00:18:10.506 "model_number": "SPDK bdev Controller", 00:18:10.506 "max_namespaces": 10, 00:18:10.506 "min_cntlid": 1, 00:18:10.506 "max_cntlid": 65519, 00:18:10.506 "ana_reporting": false 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_subsystem_add_host", 00:18:10.506 "params": { 00:18:10.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.506 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.506 "psk": "/tmp/tmp.5Q5vkkqPe8" 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_subsystem_add_ns", 00:18:10.506 "params": { 00:18:10.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.506 "namespace": { 00:18:10.506 "nsid": 1, 00:18:10.506 "bdev_name": "malloc0", 00:18:10.506 "nguid": "CB3FEEA97996429F81997CB2B1776D8B", 00:18:10.506 "uuid": "cb3feea9-7996-429f-8199-7cb2b1776d8b", 00:18:10.506 "no_auto_visible": false 00:18:10.506 } 00:18:10.506 } 00:18:10.506 }, 00:18:10.506 { 00:18:10.506 "method": "nvmf_subsystem_add_listener", 00:18:10.506 "params": { 00:18:10.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.506 "listen_address": { 00:18:10.506 "trtype": "TCP", 00:18:10.506 "adrfam": "IPv4", 00:18:10.506 "traddr": "10.0.0.2", 00:18:10.506 "trsvcid": "4420" 00:18:10.506 }, 00:18:10.506 "secure_channel": true 00:18:10.506 } 00:18:10.506 } 00:18:10.506 ] 00:18:10.506 } 00:18:10.506 ] 00:18:10.506 }' 00:18:10.506 04:18:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3403390 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3403390 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3403390 ']' 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:10.506 04:18:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.506 [2024-05-15 04:18:58.398398] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:10.506 [2024-05-15 04:18:58.398475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.506 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.506 [2024-05-15 04:18:58.474522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.765 [2024-05-15 04:18:58.585575] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.765 [2024-05-15 04:18:58.585636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.765 [2024-05-15 04:18:58.585664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.765 [2024-05-15 04:18:58.585676] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.765 [2024-05-15 04:18:58.585686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.765 [2024-05-15 04:18:58.585765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.023 [2024-05-15 04:18:58.818778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.023 [2024-05-15 04:18:58.834719] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:11.023 [2024-05-15 04:18:58.850739] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:11.023 [2024-05-15 04:18:58.850823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.023 [2024-05-15 04:18:58.859155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3403542 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3403542 /var/tmp/bdevperf.sock 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3403542 ']' 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:11.589 "subsystems": [ 00:18:11.589 { 00:18:11.589 "subsystem": "keyring", 00:18:11.589 "config": [] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "iobuf", 00:18:11.589 "config": [ 00:18:11.589 { 00:18:11.589 "method": "iobuf_set_options", 00:18:11.589 "params": { 00:18:11.589 "small_pool_count": 8192, 00:18:11.589 "large_pool_count": 1024, 00:18:11.589 "small_bufsize": 8192, 00:18:11.589 "large_bufsize": 135168 00:18:11.589 } 00:18:11.589 } 00:18:11.589 ] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "sock", 00:18:11.589 "config": [ 00:18:11.589 { 00:18:11.589 "method": "sock_impl_set_options", 00:18:11.589 "params": { 00:18:11.589 "impl_name": "posix", 00:18:11.589 "recv_buf_size": 2097152, 00:18:11.589 "send_buf_size": 2097152, 00:18:11.589 "enable_recv_pipe": true, 00:18:11.589 "enable_quickack": false, 00:18:11.589 "enable_placement_id": 0, 00:18:11.589 "enable_zerocopy_send_server": true, 00:18:11.589 "enable_zerocopy_send_client": false, 00:18:11.589 "zerocopy_threshold": 0, 00:18:11.589 "tls_version": 0, 00:18:11.589 "enable_ktls": false 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "sock_impl_set_options", 00:18:11.589 "params": { 00:18:11.589 "impl_name": "ssl", 00:18:11.589 "recv_buf_size": 4096, 00:18:11.589 
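For reference, the JSON blob fed to nvmf_tgt above via -c /dev/fd/62 encodes the same target-side TLS state that the test script later builds with individual RPC calls. A minimal sketch of that sequence, using only commands, values and the PSK path that appear elsewhere in this log (the -k flag on the listener corresponds to the "secure_channel": true field in the config above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport, subsystem, and a TLS-secured listener (-k)
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # Malloc-backed namespace and a host entry bound to the PSK interchange file
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8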
"send_buf_size": 4096, 00:18:11.589 "enable_recv_pipe": true, 00:18:11.589 "enable_quickack": false, 00:18:11.589 "enable_placement_id": 0, 00:18:11.589 "enable_zerocopy_send_server": true, 00:18:11.589 "enable_zerocopy_send_client": false, 00:18:11.589 "zerocopy_threshold": 0, 00:18:11.589 "tls_version": 0, 00:18:11.589 "enable_ktls": false 00:18:11.589 } 00:18:11.589 } 00:18:11.589 ] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "vmd", 00:18:11.589 "config": [] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "accel", 00:18:11.589 "config": [ 00:18:11.589 { 00:18:11.589 "method": "accel_set_options", 00:18:11.589 "params": { 00:18:11.589 "small_cache_size": 128, 00:18:11.589 "large_cache_size": 16, 00:18:11.589 "task_count": 2048, 00:18:11.589 "sequence_count": 2048, 00:18:11.589 "buf_count": 2048 00:18:11.589 } 00:18:11.589 } 00:18:11.589 ] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "bdev", 00:18:11.589 "config": [ 00:18:11.589 { 00:18:11.589 "method": "bdev_set_options", 00:18:11.589 "params": { 00:18:11.589 "bdev_io_pool_size": 65535, 00:18:11.589 "bdev_io_cache_size": 256, 00:18:11.589 "bdev_auto_examine": true, 00:18:11.589 "iobuf_small_cache_size": 128, 00:18:11.589 "iobuf_large_cache_size": 16 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_raid_set_options", 00:18:11.589 "params": { 00:18:11.589 "process_window_size_kb": 1024 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_iscsi_set_options", 00:18:11.589 "params": { 00:18:11.589 "timeout_sec": 30 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_nvme_set_options", 00:18:11.589 "params": { 00:18:11.589 "action_on_timeout": "none", 00:18:11.589 "timeout_us": 0, 00:18:11.589 "timeout_admin_us": 0, 00:18:11.589 "keep_alive_timeout_ms": 10000, 00:18:11.589 "arbitration_burst": 0, 00:18:11.589 "low_priority_weight": 0, 00:18:11.589 "medium_priority_weight": 0, 00:18:11.589 "high_priority_weight": 0, 00:18:11.589 "nvme_adminq_poll_period_us": 10000, 00:18:11.589 "nvme_ioq_poll_period_us": 0, 00:18:11.589 "io_queue_requests": 512, 00:18:11.589 "delay_cmd_submit": true, 00:18:11.589 "transport_retry_count": 4, 00:18:11.589 "bdev_retry_count": 3, 00:18:11.589 "transport_ack_timeout": 0, 00:18:11.589 "ctrlr_loss_timeout_sec": 0, 00:18:11.589 "reconnect_delay_sec": 0, 00:18:11.589 "fast_io_fail_timeout_sec": 0, 00:18:11.589 "disable_auto_failback": false, 00:18:11.589 "generate_uuids": false, 00:18:11.589 "transport_tos": 0, 00:18:11.589 "nvme_error_stat": false, 00:18:11.589 "rdma_srq_size": 0, 00:18:11.589 "io_path_stat": false, 00:18:11.589 "allow_accel_sequence": false, 00:18:11.589 "rdma_max_cq_size": 0, 00:18:11.589 "rdma_cm_event_timeout_ms": 0, 00:18:11.589 "dhchap_digests": [ 00:18:11.589 "sha256", 00:18:11.589 "sha384", 00:18:11.589 "sha512" 00:18:11.589 ], 00:18:11.589 "dhchap_dhgroups": [ 00:18:11.589 "null", 00:18:11.589 "ffdhe2048", 00:18:11.589 "ffdhe3072", 00:18:11.589 "ffdhe4096", 00:18:11.589 "ffdhe6144", 00:18:11.589 "ffdhe8192" 00:18:11.589 ] 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_nvme_attach_controller", 00:18:11.589 "params": { 00:18:11.589 "name": "TLSTEST", 00:18:11.589 "trtype": "TCP", 00:18:11.589 "adrfam": "IPv4", 00:18:11.589 "traddr": "10.0.0.2", 00:18:11.589 "trsvcid": "4420", 00:18:11.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.589 "prchk_reftag": false, 00:18:11.589 "prchk_guard": false, 00:18:11.589 "ctrlr_loss_timeout_sec": 0, 00:18:11.589 
"reconnect_delay_sec": 0, 00:18:11.589 "fast_io_fail_timeout_sec": 0, 00:18:11.589 "psk": "/tmp/tmp.5Q5vkkqPe8", 00:18:11.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.589 "hdgst": false, 00:18:11.589 "ddgst": false 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_nvme_set_hotplug", 00:18:11.589 "params": { 00:18:11.589 "period_us": 100000, 00:18:11.589 "enable": false 00:18:11.589 } 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "method": "bdev_wait_for_examine" 00:18:11.589 } 00:18:11.589 ] 00:18:11.589 }, 00:18:11.589 { 00:18:11.589 "subsystem": "nbd", 00:18:11.589 "config": [] 00:18:11.589 } 00:18:11.589 ] 00:18:11.589 }' 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.589 04:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.589 [2024-05-15 04:18:59.419557] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:11.589 [2024-05-15 04:18:59.419633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403542 ] 00:18:11.589 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.589 [2024-05-15 04:18:59.488645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.589 [2024-05-15 04:18:59.597267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.847 [2024-05-15 04:18:59.762722] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.847 [2024-05-15 04:18:59.762868] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:12.413 04:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.413 04:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:12.413 04:19:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:12.671 Running I/O for 10 seconds... 
00:18:22.638 00:18:22.638 Latency(us) 00:18:22.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.638 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.638 Verification LBA range: start 0x0 length 0x2000 00:18:22.638 TLSTESTn1 : 10.07 1044.85 4.08 0.00 0.00 122243.79 11505.21 135926.52 00:18:22.638 =================================================================================================================== 00:18:22.638 Total : 1044.85 4.08 0.00 0.00 122243.79 11505.21 135926.52 00:18:22.638 0 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3403542 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3403542 ']' 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3403542 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.638 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3403542 00:18:22.896 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:22.896 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:22.896 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3403542' 00:18:22.896 killing process with pid 3403542 00:18:22.896 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3403542 00:18:22.896 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.896 00:18:22.896 Latency(us) 00:18:22.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.896 =================================================================================================================== 00:18:22.896 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.896 [2024-05-15 04:19:10.666193] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:22.896 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3403542 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3403390 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3403390 ']' 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3403390 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3403390 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3403390' 00:18:23.155 killing process with pid 3403390 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3403390 00:18:23.155 [2024-05-15 04:19:10.960665] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:23.155 [2024-05-15 04:19:10.960724] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:23.155 04:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3403390 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3404870 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3404870 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3404870 ']' 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.413 04:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.413 [2024-05-15 04:19:11.300173] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:23.414 [2024-05-15 04:19:11.300266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.414 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.414 [2024-05-15 04:19:11.380421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.672 [2024-05-15 04:19:11.496771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.672 [2024-05-15 04:19:11.496846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.672 [2024-05-15 04:19:11.496863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.672 [2024-05-15 04:19:11.496876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.672 [2024-05-15 04:19:11.496888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.672 [2024-05-15 04:19:11.496946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.5Q5vkkqPe8 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5Q5vkkqPe8 00:18:24.236 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:24.494 [2024-05-15 04:19:12.469755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.494 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.752 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.010 [2024-05-15 04:19:12.959023] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:25.010 [2024-05-15 04:19:12.959113] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.010 [2024-05-15 04:19:12.959336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.010 04:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.268 malloc0 00:18:25.268 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.527 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Q5vkkqPe8 00:18:25.785 [2024-05-15 04:19:13.707716] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3405170 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3405170 /var/tmp/bdevperf.sock 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3405170 ']' 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.785 04:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 [2024-05-15 04:19:13.771066] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:25.785 [2024-05-15 04:19:13.771141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405170 ] 00:18:26.043 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.043 [2024-05-15 04:19:13.848761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.043 [2024-05-15 04:19:13.964469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.976 04:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.976 04:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:26.976 04:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5Q5vkkqPe8 00:18:26.976 04:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:27.233 [2024-05-15 04:19:15.202359] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.491 nvme0n1 00:18:27.491 04:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:27.491 Running I/O for 1 seconds... 
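This run reaches the same TLS connection as the earlier bdevperf tests, but instead of embedding "psk": "/tmp/tmp.5Q5vkkqPe8" directly in the controller parameters (the form that triggered the spdk_nvme_ctrlr_opts.psk deprecation warnings above), the PSK file is first registered with the keyring and then referenced by name. A minimal sketch of that two-step flow, using the same commands that appear on this line of the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Register the PSK interchange file under the name "key0"
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5Q5vkkqPe8
  # Attach to the TLS-secured subsystem, passing the key by name rather than by path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1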
00:18:28.862 00:18:28.862 Latency(us) 00:18:28.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:28.862 Verification LBA range: start 0x0 length 0x2000 00:18:28.862 nvme0n1 : 1.08 1353.78 5.29 0.00 0.00 91756.99 6456.51 141363.58 00:18:28.862 =================================================================================================================== 00:18:28.862 Total : 1353.78 5.29 0.00 0.00 91756.99 6456.51 141363.58 00:18:28.862 0 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3405170 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3405170 ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3405170 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3405170 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3405170' 00:18:28.862 killing process with pid 3405170 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3405170 00:18:28.862 Received shutdown signal, test time was about 1.000000 seconds 00:18:28.862 00:18:28.862 Latency(us) 00:18:28.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.862 =================================================================================================================== 00:18:28.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3405170 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3404870 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3404870 ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3404870 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3404870 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3404870' 00:18:28.862 killing process with pid 3404870 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3404870 00:18:28.862 [2024-05-15 04:19:16.784492] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:28.862 04:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3404870 00:18:28.862 [2024-05-15 04:19:16.784553] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' 
scheduled for removal in v24.09 hit 1 times 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3405576 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3405576 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3405576 ']' 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.119 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.120 04:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.120 [2024-05-15 04:19:17.111538] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:29.120 [2024-05-15 04:19:17.111629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.377 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.377 [2024-05-15 04:19:17.194830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.377 [2024-05-15 04:19:17.311704] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.377 [2024-05-15 04:19:17.311780] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.377 [2024-05-15 04:19:17.311797] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.377 [2024-05-15 04:19:17.311810] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.377 [2024-05-15 04:19:17.311822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.378 [2024-05-15 04:19:17.311864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.340 [2024-05-15 04:19:18.104581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.340 malloc0 00:18:30.340 [2024-05-15 04:19:18.136179] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.340 [2024-05-15 04:19:18.136292] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.340 [2024-05-15 04:19:18.136526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3405727 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3405727 /var/tmp/bdevperf.sock 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3405727 ']' 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.340 04:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.340 [2024-05-15 04:19:18.205010] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:18:30.340 [2024-05-15 04:19:18.205085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405727 ] 00:18:30.340 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.340 [2024-05-15 04:19:18.278527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.598 [2024-05-15 04:19:18.406447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.163 04:19:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.163 04:19:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:31.163 04:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5Q5vkkqPe8 00:18:31.420 04:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:31.677 [2024-05-15 04:19:19.633313] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.936 nvme0n1 00:18:31.936 04:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.936 Running I/O for 1 seconds... 00:18:33.310 00:18:33.310 Latency(us) 00:18:33.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.310 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:33.310 Verification LBA range: start 0x0 length 0x2000 00:18:33.310 nvme0n1 : 1.08 1322.41 5.17 0.00 0.00 93936.32 7524.50 148354.09 00:18:33.310 =================================================================================================================== 00:18:33.310 Total : 1322.41 5.17 0.00 0.00 93936.32 7524.50 148354.09 00:18:33.310 0 00:18:33.310 04:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:33.310 04:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.310 04:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.310 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.310 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:33.310 "subsystems": [ 00:18:33.310 { 00:18:33.310 "subsystem": "keyring", 00:18:33.310 "config": [ 00:18:33.310 { 00:18:33.310 "method": "keyring_file_add_key", 00:18:33.310 "params": { 00:18:33.310 "name": "key0", 00:18:33.310 "path": "/tmp/tmp.5Q5vkkqPe8" 00:18:33.310 } 00:18:33.310 } 00:18:33.310 ] 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "subsystem": "iobuf", 00:18:33.310 "config": [ 00:18:33.310 { 00:18:33.310 "method": "iobuf_set_options", 00:18:33.310 "params": { 00:18:33.310 "small_pool_count": 8192, 00:18:33.310 "large_pool_count": 1024, 00:18:33.310 "small_bufsize": 8192, 00:18:33.310 "large_bufsize": 135168 00:18:33.310 } 00:18:33.310 } 00:18:33.310 ] 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "subsystem": "sock", 00:18:33.310 "config": [ 00:18:33.310 { 00:18:33.310 "method": "sock_impl_set_options", 00:18:33.310 "params": { 00:18:33.310 "impl_name": "posix", 00:18:33.310 "recv_buf_size": 2097152, 
00:18:33.310 "send_buf_size": 2097152, 00:18:33.310 "enable_recv_pipe": true, 00:18:33.310 "enable_quickack": false, 00:18:33.310 "enable_placement_id": 0, 00:18:33.310 "enable_zerocopy_send_server": true, 00:18:33.310 "enable_zerocopy_send_client": false, 00:18:33.310 "zerocopy_threshold": 0, 00:18:33.310 "tls_version": 0, 00:18:33.310 "enable_ktls": false 00:18:33.310 } 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "method": "sock_impl_set_options", 00:18:33.310 "params": { 00:18:33.310 "impl_name": "ssl", 00:18:33.310 "recv_buf_size": 4096, 00:18:33.310 "send_buf_size": 4096, 00:18:33.310 "enable_recv_pipe": true, 00:18:33.310 "enable_quickack": false, 00:18:33.310 "enable_placement_id": 0, 00:18:33.310 "enable_zerocopy_send_server": true, 00:18:33.310 "enable_zerocopy_send_client": false, 00:18:33.310 "zerocopy_threshold": 0, 00:18:33.310 "tls_version": 0, 00:18:33.310 "enable_ktls": false 00:18:33.310 } 00:18:33.310 } 00:18:33.310 ] 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "subsystem": "vmd", 00:18:33.310 "config": [] 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "subsystem": "accel", 00:18:33.310 "config": [ 00:18:33.310 { 00:18:33.310 "method": "accel_set_options", 00:18:33.310 "params": { 00:18:33.310 "small_cache_size": 128, 00:18:33.310 "large_cache_size": 16, 00:18:33.310 "task_count": 2048, 00:18:33.310 "sequence_count": 2048, 00:18:33.310 "buf_count": 2048 00:18:33.310 } 00:18:33.310 } 00:18:33.310 ] 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "subsystem": "bdev", 00:18:33.310 "config": [ 00:18:33.310 { 00:18:33.310 "method": "bdev_set_options", 00:18:33.310 "params": { 00:18:33.310 "bdev_io_pool_size": 65535, 00:18:33.310 "bdev_io_cache_size": 256, 00:18:33.310 "bdev_auto_examine": true, 00:18:33.310 "iobuf_small_cache_size": 128, 00:18:33.310 "iobuf_large_cache_size": 16 00:18:33.310 } 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "method": "bdev_raid_set_options", 00:18:33.310 "params": { 00:18:33.310 "process_window_size_kb": 1024 00:18:33.310 } 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "method": "bdev_iscsi_set_options", 00:18:33.310 "params": { 00:18:33.310 "timeout_sec": 30 00:18:33.310 } 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "method": "bdev_nvme_set_options", 00:18:33.310 "params": { 00:18:33.310 "action_on_timeout": "none", 00:18:33.310 "timeout_us": 0, 00:18:33.310 "timeout_admin_us": 0, 00:18:33.310 "keep_alive_timeout_ms": 10000, 00:18:33.310 "arbitration_burst": 0, 00:18:33.311 "low_priority_weight": 0, 00:18:33.311 "medium_priority_weight": 0, 00:18:33.311 "high_priority_weight": 0, 00:18:33.311 "nvme_adminq_poll_period_us": 10000, 00:18:33.311 "nvme_ioq_poll_period_us": 0, 00:18:33.311 "io_queue_requests": 0, 00:18:33.311 "delay_cmd_submit": true, 00:18:33.311 "transport_retry_count": 4, 00:18:33.311 "bdev_retry_count": 3, 00:18:33.311 "transport_ack_timeout": 0, 00:18:33.311 "ctrlr_loss_timeout_sec": 0, 00:18:33.311 "reconnect_delay_sec": 0, 00:18:33.311 "fast_io_fail_timeout_sec": 0, 00:18:33.311 "disable_auto_failback": false, 00:18:33.311 "generate_uuids": false, 00:18:33.311 "transport_tos": 0, 00:18:33.311 "nvme_error_stat": false, 00:18:33.311 "rdma_srq_size": 0, 00:18:33.311 "io_path_stat": false, 00:18:33.311 "allow_accel_sequence": false, 00:18:33.311 "rdma_max_cq_size": 0, 00:18:33.311 "rdma_cm_event_timeout_ms": 0, 00:18:33.311 "dhchap_digests": [ 00:18:33.311 "sha256", 00:18:33.311 "sha384", 00:18:33.311 "sha512" 00:18:33.311 ], 00:18:33.311 "dhchap_dhgroups": [ 00:18:33.311 "null", 00:18:33.311 "ffdhe2048", 00:18:33.311 "ffdhe3072", 
00:18:33.311 "ffdhe4096", 00:18:33.311 "ffdhe6144", 00:18:33.311 "ffdhe8192" 00:18:33.311 ] 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "bdev_nvme_set_hotplug", 00:18:33.311 "params": { 00:18:33.311 "period_us": 100000, 00:18:33.311 "enable": false 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "bdev_malloc_create", 00:18:33.311 "params": { 00:18:33.311 "name": "malloc0", 00:18:33.311 "num_blocks": 8192, 00:18:33.311 "block_size": 4096, 00:18:33.311 "physical_block_size": 4096, 00:18:33.311 "uuid": "b168fc12-a234-448b-a2a4-59bfdf02f95f", 00:18:33.311 "optimal_io_boundary": 0 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "bdev_wait_for_examine" 00:18:33.311 } 00:18:33.311 ] 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "subsystem": "nbd", 00:18:33.311 "config": [] 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "subsystem": "scheduler", 00:18:33.311 "config": [ 00:18:33.311 { 00:18:33.311 "method": "framework_set_scheduler", 00:18:33.311 "params": { 00:18:33.311 "name": "static" 00:18:33.311 } 00:18:33.311 } 00:18:33.311 ] 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "subsystem": "nvmf", 00:18:33.311 "config": [ 00:18:33.311 { 00:18:33.311 "method": "nvmf_set_config", 00:18:33.311 "params": { 00:18:33.311 "discovery_filter": "match_any", 00:18:33.311 "admin_cmd_passthru": { 00:18:33.311 "identify_ctrlr": false 00:18:33.311 } 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_set_max_subsystems", 00:18:33.311 "params": { 00:18:33.311 "max_subsystems": 1024 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_set_crdt", 00:18:33.311 "params": { 00:18:33.311 "crdt1": 0, 00:18:33.311 "crdt2": 0, 00:18:33.311 "crdt3": 0 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_create_transport", 00:18:33.311 "params": { 00:18:33.311 "trtype": "TCP", 00:18:33.311 "max_queue_depth": 128, 00:18:33.311 "max_io_qpairs_per_ctrlr": 127, 00:18:33.311 "in_capsule_data_size": 4096, 00:18:33.311 "max_io_size": 131072, 00:18:33.311 "io_unit_size": 131072, 00:18:33.311 "max_aq_depth": 128, 00:18:33.311 "num_shared_buffers": 511, 00:18:33.311 "buf_cache_size": 4294967295, 00:18:33.311 "dif_insert_or_strip": false, 00:18:33.311 "zcopy": false, 00:18:33.311 "c2h_success": false, 00:18:33.311 "sock_priority": 0, 00:18:33.311 "abort_timeout_sec": 1, 00:18:33.311 "ack_timeout": 0, 00:18:33.311 "data_wr_pool_size": 0 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_create_subsystem", 00:18:33.311 "params": { 00:18:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.311 "allow_any_host": false, 00:18:33.311 "serial_number": "00000000000000000000", 00:18:33.311 "model_number": "SPDK bdev Controller", 00:18:33.311 "max_namespaces": 32, 00:18:33.311 "min_cntlid": 1, 00:18:33.311 "max_cntlid": 65519, 00:18:33.311 "ana_reporting": false 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_subsystem_add_host", 00:18:33.311 "params": { 00:18:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.311 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.311 "psk": "key0" 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_subsystem_add_ns", 00:18:33.311 "params": { 00:18:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.311 "namespace": { 00:18:33.311 "nsid": 1, 00:18:33.311 "bdev_name": "malloc0", 00:18:33.311 "nguid": "B168FC12A234448BA2A459BFDF02F95F", 00:18:33.311 "uuid": "b168fc12-a234-448b-a2a4-59bfdf02f95f", 00:18:33.311 
"no_auto_visible": false 00:18:33.311 } 00:18:33.311 } 00:18:33.311 }, 00:18:33.311 { 00:18:33.311 "method": "nvmf_subsystem_add_listener", 00:18:33.311 "params": { 00:18:33.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.311 "listen_address": { 00:18:33.311 "trtype": "TCP", 00:18:33.311 "adrfam": "IPv4", 00:18:33.311 "traddr": "10.0.0.2", 00:18:33.311 "trsvcid": "4420" 00:18:33.311 }, 00:18:33.311 "secure_channel": true 00:18:33.311 } 00:18:33.311 } 00:18:33.311 ] 00:18:33.311 } 00:18:33.311 ] 00:18:33.311 }' 00:18:33.311 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:33.570 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:33.570 "subsystems": [ 00:18:33.570 { 00:18:33.570 "subsystem": "keyring", 00:18:33.570 "config": [ 00:18:33.570 { 00:18:33.570 "method": "keyring_file_add_key", 00:18:33.570 "params": { 00:18:33.570 "name": "key0", 00:18:33.570 "path": "/tmp/tmp.5Q5vkkqPe8" 00:18:33.570 } 00:18:33.570 } 00:18:33.570 ] 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "subsystem": "iobuf", 00:18:33.570 "config": [ 00:18:33.570 { 00:18:33.570 "method": "iobuf_set_options", 00:18:33.570 "params": { 00:18:33.570 "small_pool_count": 8192, 00:18:33.570 "large_pool_count": 1024, 00:18:33.570 "small_bufsize": 8192, 00:18:33.570 "large_bufsize": 135168 00:18:33.570 } 00:18:33.570 } 00:18:33.570 ] 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "subsystem": "sock", 00:18:33.570 "config": [ 00:18:33.570 { 00:18:33.570 "method": "sock_impl_set_options", 00:18:33.570 "params": { 00:18:33.570 "impl_name": "posix", 00:18:33.570 "recv_buf_size": 2097152, 00:18:33.570 "send_buf_size": 2097152, 00:18:33.570 "enable_recv_pipe": true, 00:18:33.570 "enable_quickack": false, 00:18:33.570 "enable_placement_id": 0, 00:18:33.570 "enable_zerocopy_send_server": true, 00:18:33.570 "enable_zerocopy_send_client": false, 00:18:33.570 "zerocopy_threshold": 0, 00:18:33.570 "tls_version": 0, 00:18:33.570 "enable_ktls": false 00:18:33.570 } 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "method": "sock_impl_set_options", 00:18:33.570 "params": { 00:18:33.570 "impl_name": "ssl", 00:18:33.570 "recv_buf_size": 4096, 00:18:33.570 "send_buf_size": 4096, 00:18:33.570 "enable_recv_pipe": true, 00:18:33.570 "enable_quickack": false, 00:18:33.570 "enable_placement_id": 0, 00:18:33.570 "enable_zerocopy_send_server": true, 00:18:33.570 "enable_zerocopy_send_client": false, 00:18:33.570 "zerocopy_threshold": 0, 00:18:33.570 "tls_version": 0, 00:18:33.570 "enable_ktls": false 00:18:33.570 } 00:18:33.570 } 00:18:33.570 ] 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "subsystem": "vmd", 00:18:33.570 "config": [] 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "subsystem": "accel", 00:18:33.570 "config": [ 00:18:33.570 { 00:18:33.570 "method": "accel_set_options", 00:18:33.570 "params": { 00:18:33.570 "small_cache_size": 128, 00:18:33.570 "large_cache_size": 16, 00:18:33.570 "task_count": 2048, 00:18:33.570 "sequence_count": 2048, 00:18:33.570 "buf_count": 2048 00:18:33.570 } 00:18:33.570 } 00:18:33.570 ] 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "subsystem": "bdev", 00:18:33.570 "config": [ 00:18:33.570 { 00:18:33.570 "method": "bdev_set_options", 00:18:33.570 "params": { 00:18:33.570 "bdev_io_pool_size": 65535, 00:18:33.570 "bdev_io_cache_size": 256, 00:18:33.570 "bdev_auto_examine": true, 00:18:33.570 "iobuf_small_cache_size": 128, 00:18:33.570 "iobuf_large_cache_size": 16 00:18:33.570 } 00:18:33.570 }, 
00:18:33.570 { 00:18:33.570 "method": "bdev_raid_set_options", 00:18:33.570 "params": { 00:18:33.570 "process_window_size_kb": 1024 00:18:33.570 } 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "method": "bdev_iscsi_set_options", 00:18:33.570 "params": { 00:18:33.570 "timeout_sec": 30 00:18:33.570 } 00:18:33.570 }, 00:18:33.570 { 00:18:33.570 "method": "bdev_nvme_set_options", 00:18:33.570 "params": { 00:18:33.570 "action_on_timeout": "none", 00:18:33.570 "timeout_us": 0, 00:18:33.570 "timeout_admin_us": 0, 00:18:33.570 "keep_alive_timeout_ms": 10000, 00:18:33.570 "arbitration_burst": 0, 00:18:33.570 "low_priority_weight": 0, 00:18:33.570 "medium_priority_weight": 0, 00:18:33.570 "high_priority_weight": 0, 00:18:33.570 "nvme_adminq_poll_period_us": 10000, 00:18:33.570 "nvme_ioq_poll_period_us": 0, 00:18:33.570 "io_queue_requests": 512, 00:18:33.570 "delay_cmd_submit": true, 00:18:33.570 "transport_retry_count": 4, 00:18:33.570 "bdev_retry_count": 3, 00:18:33.570 "transport_ack_timeout": 0, 00:18:33.570 "ctrlr_loss_timeout_sec": 0, 00:18:33.570 "reconnect_delay_sec": 0, 00:18:33.570 "fast_io_fail_timeout_sec": 0, 00:18:33.570 "disable_auto_failback": false, 00:18:33.570 "generate_uuids": false, 00:18:33.570 "transport_tos": 0, 00:18:33.570 "nvme_error_stat": false, 00:18:33.570 "rdma_srq_size": 0, 00:18:33.570 "io_path_stat": false, 00:18:33.570 "allow_accel_sequence": false, 00:18:33.570 "rdma_max_cq_size": 0, 00:18:33.570 "rdma_cm_event_timeout_ms": 0, 00:18:33.570 "dhchap_digests": [ 00:18:33.570 "sha256", 00:18:33.570 "sha384", 00:18:33.570 "sha512" 00:18:33.570 ], 00:18:33.570 "dhchap_dhgroups": [ 00:18:33.570 "null", 00:18:33.570 "ffdhe2048", 00:18:33.570 "ffdhe3072", 00:18:33.570 "ffdhe4096", 00:18:33.570 "ffdhe6144", 00:18:33.570 "ffdhe8192" 00:18:33.570 ] 00:18:33.570 } 00:18:33.570 }, 00:18:33.571 { 00:18:33.571 "method": "bdev_nvme_attach_controller", 00:18:33.571 "params": { 00:18:33.571 "name": "nvme0", 00:18:33.571 "trtype": "TCP", 00:18:33.571 "adrfam": "IPv4", 00:18:33.571 "traddr": "10.0.0.2", 00:18:33.571 "trsvcid": "4420", 00:18:33.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.571 "prchk_reftag": false, 00:18:33.571 "prchk_guard": false, 00:18:33.571 "ctrlr_loss_timeout_sec": 0, 00:18:33.571 "reconnect_delay_sec": 0, 00:18:33.571 "fast_io_fail_timeout_sec": 0, 00:18:33.571 "psk": "key0", 00:18:33.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.571 "hdgst": false, 00:18:33.571 "ddgst": false 00:18:33.571 } 00:18:33.571 }, 00:18:33.571 { 00:18:33.571 "method": "bdev_nvme_set_hotplug", 00:18:33.571 "params": { 00:18:33.571 "period_us": 100000, 00:18:33.571 "enable": false 00:18:33.571 } 00:18:33.571 }, 00:18:33.571 { 00:18:33.571 "method": "bdev_enable_histogram", 00:18:33.571 "params": { 00:18:33.571 "name": "nvme0n1", 00:18:33.571 "enable": true 00:18:33.571 } 00:18:33.571 }, 00:18:33.571 { 00:18:33.571 "method": "bdev_wait_for_examine" 00:18:33.571 } 00:18:33.571 ] 00:18:33.571 }, 00:18:33.571 { 00:18:33.571 "subsystem": "nbd", 00:18:33.571 "config": [] 00:18:33.571 } 00:18:33.571 ] 00:18:33.571 }' 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3405727 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3405727 ']' 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3405727 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.571 
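Note on the block above: the bperfcfg blob is a live snapshot taken with save_config from the still-running bdevperf instance, and the same pair of commands can be used to capture and replay any SPDK application's RPC state. A minimal sketch, assuming the socket path from this run and an arbitrary output file name:

    # dump the current JSON-RPC configuration of the app listening on the given socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf_config.json

    # replay it on the next start; bdevperf accepts a JSON config via -c, as seen with /dev/fd/63 further down
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -c bperf_config.json
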
04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3405727 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3405727' 00:18:33.571 killing process with pid 3405727 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3405727 00:18:33.571 Received shutdown signal, test time was about 1.000000 seconds 00:18:33.571 00:18:33.571 Latency(us) 00:18:33.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.571 =================================================================================================================== 00:18:33.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.571 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3405727 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3405576 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3405576 ']' 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3405576 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3405576 00:18:33.830 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:33.831 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:33.831 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3405576' 00:18:33.831 killing process with pid 3405576 00:18:33.831 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3405576 00:18:33.831 [2024-05-15 04:19:21.697873] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:33.831 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3405576 00:18:34.089 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:34.089 04:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.089 04:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:34.089 "subsystems": [ 00:18:34.089 { 00:18:34.089 "subsystem": "keyring", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "keyring_file_add_key", 00:18:34.089 "params": { 00:18:34.089 "name": "key0", 00:18:34.089 "path": "/tmp/tmp.5Q5vkkqPe8" 00:18:34.089 } 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "iobuf", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "iobuf_set_options", 00:18:34.089 "params": { 00:18:34.089 "small_pool_count": 8192, 00:18:34.089 "large_pool_count": 1024, 00:18:34.089 "small_bufsize": 8192, 00:18:34.089 "large_bufsize": 135168 00:18:34.089 } 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "sock", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "sock_impl_set_options", 00:18:34.089 "params": { 00:18:34.089 "impl_name": "posix", 00:18:34.089 
"recv_buf_size": 2097152, 00:18:34.089 "send_buf_size": 2097152, 00:18:34.089 "enable_recv_pipe": true, 00:18:34.089 "enable_quickack": false, 00:18:34.089 "enable_placement_id": 0, 00:18:34.089 "enable_zerocopy_send_server": true, 00:18:34.089 "enable_zerocopy_send_client": false, 00:18:34.089 "zerocopy_threshold": 0, 00:18:34.089 "tls_version": 0, 00:18:34.089 "enable_ktls": false 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "sock_impl_set_options", 00:18:34.089 "params": { 00:18:34.089 "impl_name": "ssl", 00:18:34.089 "recv_buf_size": 4096, 00:18:34.089 "send_buf_size": 4096, 00:18:34.089 "enable_recv_pipe": true, 00:18:34.089 "enable_quickack": false, 00:18:34.089 "enable_placement_id": 0, 00:18:34.089 "enable_zerocopy_send_server": true, 00:18:34.089 "enable_zerocopy_send_client": false, 00:18:34.089 "zerocopy_threshold": 0, 00:18:34.089 "tls_version": 0, 00:18:34.089 "enable_ktls": false 00:18:34.089 } 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "vmd", 00:18:34.089 "config": [] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "accel", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "accel_set_options", 00:18:34.089 "params": { 00:18:34.089 "small_cache_size": 128, 00:18:34.089 "large_cache_size": 16, 00:18:34.089 "task_count": 2048, 00:18:34.089 "sequence_count": 2048, 00:18:34.089 "buf_count": 2048 00:18:34.089 } 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "bdev", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "bdev_set_options", 00:18:34.089 "params": { 00:18:34.089 "bdev_io_pool_size": 65535, 00:18:34.089 "bdev_io_cache_size": 256, 00:18:34.089 "bdev_auto_examine": true, 00:18:34.089 "iobuf_small_cache_size": 128, 00:18:34.089 "iobuf_large_cache_size": 16 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_raid_set_options", 00:18:34.089 "params": { 00:18:34.089 "process_window_size_kb": 1024 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_iscsi_set_options", 00:18:34.089 "params": { 00:18:34.089 "timeout_sec": 30 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_nvme_set_options", 00:18:34.089 "params": { 00:18:34.089 "action_on_timeout": "none", 00:18:34.089 "timeout_us": 0, 00:18:34.089 "timeout_admin_us": 0, 00:18:34.089 "keep_alive_timeout_ms": 10000, 00:18:34.089 "arbitration_burst": 0, 00:18:34.089 "low_priority_weight": 0, 00:18:34.089 "medium_priority_weight": 0, 00:18:34.089 "high_priority_weight": 0, 00:18:34.089 "nvme_adminq_poll_period_us": 10000, 00:18:34.089 "nvme_ioq_poll_period_us": 0, 00:18:34.089 "io_queue_requests": 0, 00:18:34.089 "delay_cmd_submit": true, 00:18:34.089 "transport_retry_count": 4, 00:18:34.089 "bdev_retry_count": 3, 00:18:34.089 "transport_ack_timeout": 0, 00:18:34.089 "ctrlr_loss_timeout_sec": 0, 00:18:34.089 "reconnect_delay_sec": 0, 00:18:34.089 "fast_io_fail_timeout_sec": 0, 00:18:34.089 "disable_auto_failback": false, 00:18:34.089 "generate_uuids": false, 00:18:34.089 "transport_tos": 0, 00:18:34.089 "nvme_error_stat": false, 00:18:34.089 "rdma_srq_size": 0, 00:18:34.089 "io_path_stat": false, 00:18:34.089 "allow_accel_sequence": false, 00:18:34.089 "rdma_max_cq_size": 0, 00:18:34.089 "rdma_cm_event_timeout_ms": 0, 00:18:34.089 "dhchap_digests": [ 00:18:34.089 "sha256", 00:18:34.089 "sha384", 00:18:34.089 "sha512" 00:18:34.089 ], 00:18:34.089 "dhchap_dhgroups": [ 00:18:34.089 "null", 00:18:34.089 "ffdhe2048", 
00:18:34.089 "ffdhe3072", 00:18:34.089 "ffdhe4096", 00:18:34.089 "ffdhe6144", 00:18:34.089 "ffdhe8192" 00:18:34.089 ] 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_nvme_set_hotplug", 00:18:34.089 "params": { 00:18:34.089 "period_us": 100000, 00:18:34.089 "enable": false 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_malloc_create", 00:18:34.089 "params": { 00:18:34.089 "name": "malloc0", 00:18:34.089 "num_blocks": 8192, 00:18:34.089 "block_size": 4096, 00:18:34.089 "physical_block_size": 4096, 00:18:34.089 "uuid": "b168fc12-a234-448b-a2a4-59bfdf02f95f", 00:18:34.089 "optimal_io_boundary": 0 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "bdev_wait_for_examine" 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "nbd", 00:18:34.089 "config": [] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "scheduler", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "framework_set_scheduler", 00:18:34.089 "params": { 00:18:34.089 "name": "static" 00:18:34.089 } 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "nvmf", 00:18:34.089 "config": [ 00:18:34.089 { 00:18:34.089 "method": "nvmf_set_config", 00:18:34.089 "params": { 00:18:34.089 "discovery_filter": "match_any", 00:18:34.089 "admin_cmd_passthru": { 00:18:34.089 "identify_ctrlr": false 00:18:34.089 } 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "nvmf_set_max_subsystems", 00:18:34.089 "params": { 00:18:34.089 "max_subsystems": 1024 00:18:34.089 } 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "method": "nvmf_set_crdt", 00:18:34.089 "params": { 00:18:34.089 "crdt1": 0, 00:18:34.090 "crdt2": 0, 00:18:34.090 "crdt3": 0 00:18:34.090 } 00:18:34.090 }, 00:18:34.090 { 00:18:34.090 "method": "nvmf_create_transport", 00:18:34.090 "params": { 00:18:34.090 "trtype": "TCP", 00:18:34.090 "max_queue_depth": 128, 00:18:34.090 "max_io_qpairs_per_ctrlr": 127, 00:18:34.090 "in_capsule_data_size": 4096, 00:18:34.090 "max_io_size": 131072, 00:18:34.090 "io_unit_size": 131072, 00:18:34.090 "max_aq_depth": 128, 00:18:34.090 "num_shared_buffers": 511, 00:18:34.090 "buf_cache_size": 4294967295, 00:18:34.090 "dif_insert_or_strip": false, 00:18:34.090 "zcopy": false, 00:18:34.090 "c2h_success": false, 00:18:34.090 "sock_priority": 0, 00:18:34.090 "abort_timeout_sec": 1, 00:18:34.090 "ack_timeout": 0, 00:18:34.090 "data_wr_pool_size": 0 00:18:34.090 } 00:18:34.090 }, 00:18:34.090 { 00:18:34.090 "method": "nvmf_create_subsystem", 00:18:34.090 "params": { 00:18:34.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.090 "allow_any_host": false, 00:18:34.090 "serial_number": "00000000000000000000", 00:18:34.090 "model_number": "SPDK bdev Controller", 00:18:34.090 "max_namespaces": 32, 00:18:34.090 "min_cntlid": 1, 00:18:34.090 "max_cntlid": 65519, 00:18:34.090 "ana_reporting": false 00:18:34.090 } 00:18:34.090 }, 00:18:34.090 { 00:18:34.090 "method": "nvmf_subsystem_add_host", 00:18:34.090 "params": { 00:18:34.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.090 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.090 "psk": "key0" 00:18:34.090 } 00:18:34.090 }, 00:18:34.090 { 00:18:34.090 "method": "nvmf_subsystem_add_ns", 00:18:34.090 "params": { 00:18:34.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.090 "namespace": { 00:18:34.090 "nsid": 1, 00:18:34.090 "bdev_name": "malloc0", 00:18:34.090 "nguid": "B168FC12A234448BA2A459BFDF02F95F", 00:18:34.090 "uuid": 
"b168fc12-a234-448b-a2a4-59bfdf02f95f", 00:18:34.090 "no_auto_visible": false 00:18:34.090 } 00:18:34.090 } 00:18:34.090 }, 00:18:34.090 { 00:18:34.090 "method": "nvmf_subsystem_add_listener", 00:18:34.090 "params": { 00:18:34.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.090 "listen_address": { 00:18:34.090 "trtype": "TCP", 00:18:34.090 "adrfam": "IPv4", 00:18:34.090 "traddr": "10.0.0.2", 00:18:34.090 "trsvcid": "4420" 00:18:34.090 }, 00:18:34.090 "secure_channel": true 00:18:34.090 } 00:18:34.090 } 00:18:34.090 ] 00:18:34.090 } 00:18:34.090 ] 00:18:34.090 }' 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3406265 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3406265 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3406265 ']' 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.090 04:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.090 [2024-05-15 04:19:22.042730] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:34.090 [2024-05-15 04:19:22.042811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.090 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.348 [2024-05-15 04:19:22.122402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.348 [2024-05-15 04:19:22.237586] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.348 [2024-05-15 04:19:22.237657] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.348 [2024-05-15 04:19:22.237673] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.348 [2024-05-15 04:19:22.237687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.348 [2024-05-15 04:19:22.237698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.348 [2024-05-15 04:19:22.237799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.606 [2024-05-15 04:19:22.478072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.606 [2024-05-15 04:19:22.510025] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:34.606 [2024-05-15 04:19:22.510113] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.606 [2024-05-15 04:19:22.522150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3406413 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3406413 /var/tmp/bdevperf.sock 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3406413 ']' 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:35.173 04:19:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:35.173 "subsystems": [ 00:18:35.173 { 00:18:35.173 "subsystem": "keyring", 00:18:35.173 "config": [ 00:18:35.173 { 00:18:35.173 "method": "keyring_file_add_key", 00:18:35.173 "params": { 00:18:35.173 "name": "key0", 00:18:35.173 "path": "/tmp/tmp.5Q5vkkqPe8" 00:18:35.173 } 00:18:35.173 } 00:18:35.173 ] 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "subsystem": "iobuf", 00:18:35.173 "config": [ 00:18:35.173 { 00:18:35.173 "method": "iobuf_set_options", 00:18:35.173 "params": { 00:18:35.173 "small_pool_count": 8192, 00:18:35.173 "large_pool_count": 1024, 00:18:35.173 "small_bufsize": 8192, 00:18:35.173 "large_bufsize": 135168 00:18:35.173 } 00:18:35.173 } 00:18:35.173 ] 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "subsystem": "sock", 00:18:35.173 "config": [ 00:18:35.173 { 00:18:35.173 "method": "sock_impl_set_options", 00:18:35.173 "params": { 00:18:35.173 "impl_name": "posix", 00:18:35.173 "recv_buf_size": 2097152, 00:18:35.173 "send_buf_size": 2097152, 00:18:35.173 "enable_recv_pipe": true, 00:18:35.173 "enable_quickack": false, 00:18:35.173 "enable_placement_id": 0, 00:18:35.173 "enable_zerocopy_send_server": true, 00:18:35.173 "enable_zerocopy_send_client": false, 00:18:35.173 "zerocopy_threshold": 0, 00:18:35.173 "tls_version": 0, 00:18:35.173 "enable_ktls": false 00:18:35.173 } 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "method": "sock_impl_set_options", 00:18:35.173 "params": { 00:18:35.173 "impl_name": "ssl", 00:18:35.173 "recv_buf_size": 4096, 00:18:35.173 "send_buf_size": 4096, 00:18:35.173 "enable_recv_pipe": true, 00:18:35.173 "enable_quickack": false, 00:18:35.173 "enable_placement_id": 0, 00:18:35.173 "enable_zerocopy_send_server": true, 00:18:35.173 "enable_zerocopy_send_client": false, 00:18:35.173 "zerocopy_threshold": 0, 00:18:35.173 "tls_version": 0, 00:18:35.173 "enable_ktls": false 00:18:35.173 } 00:18:35.173 } 00:18:35.173 ] 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "subsystem": "vmd", 00:18:35.173 "config": [] 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "subsystem": "accel", 00:18:35.173 "config": [ 00:18:35.173 { 00:18:35.173 "method": "accel_set_options", 00:18:35.173 "params": { 00:18:35.173 "small_cache_size": 128, 00:18:35.173 "large_cache_size": 16, 00:18:35.173 "task_count": 2048, 00:18:35.173 "sequence_count": 2048, 00:18:35.173 "buf_count": 2048 00:18:35.173 } 00:18:35.173 } 00:18:35.173 ] 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "subsystem": "bdev", 00:18:35.173 "config": [ 00:18:35.173 { 00:18:35.173 "method": "bdev_set_options", 00:18:35.173 "params": { 00:18:35.173 "bdev_io_pool_size": 65535, 00:18:35.173 "bdev_io_cache_size": 256, 00:18:35.173 "bdev_auto_examine": true, 00:18:35.173 "iobuf_small_cache_size": 128, 00:18:35.173 "iobuf_large_cache_size": 16 00:18:35.173 } 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "method": "bdev_raid_set_options", 00:18:35.173 "params": { 00:18:35.173 "process_window_size_kb": 1024 00:18:35.173 } 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "method": "bdev_iscsi_set_options", 00:18:35.173 "params": { 00:18:35.173 "timeout_sec": 30 00:18:35.173 } 00:18:35.173 }, 00:18:35.173 { 00:18:35.173 "method": "bdev_nvme_set_options", 00:18:35.173 "params": { 00:18:35.173 "action_on_timeout": "none", 00:18:35.173 "timeout_us": 0, 00:18:35.173 "timeout_admin_us": 0, 00:18:35.173 "keep_alive_timeout_ms": 10000, 00:18:35.173 "arbitration_burst": 0, 00:18:35.173 "low_priority_weight": 0, 00:18:35.173 "medium_priority_weight": 0, 00:18:35.173 
"high_priority_weight": 0, 00:18:35.173 "nvme_adminq_poll_period_us": 10000, 00:18:35.173 "nvme_ioq_poll_period_us": 0, 00:18:35.173 "io_queue_requests": 512, 00:18:35.173 "delay_cmd_submit": true, 00:18:35.173 "transport_retry_count": 4, 00:18:35.173 "bdev_retry_count": 3, 00:18:35.173 "transport_ack_timeout": 0, 00:18:35.173 "ctrlr_loss_timeout_sec": 0, 00:18:35.173 "reconnect_delay_sec": 0, 00:18:35.173 "fast_io_fail_timeout_sec": 0, 00:18:35.173 "disable_auto_failback": false, 00:18:35.173 "generate_uuids": false, 00:18:35.173 "transport_tos": 0, 00:18:35.173 "nvme_error_stat": false, 00:18:35.173 "rdma_srq_size": 0, 00:18:35.173 "io_path_stat": false, 00:18:35.173 "allow_accel_sequence": false, 00:18:35.173 "rdma_max_cq_size": 0, 00:18:35.173 "rdma_cm_event_timeout_ms": 0, 00:18:35.174 "dhchap_digests": [ 00:18:35.174 "sha256", 00:18:35.174 "sha384", 00:18:35.174 "sha512" 00:18:35.174 ], 00:18:35.174 "dhchap_dhgroups": [ 00:18:35.174 "null", 00:18:35.174 "ffdhe2048", 00:18:35.174 "ffdhe3072", 00:18:35.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.174 "ffdhe4096", 00:18:35.174 "ffdhe6144", 00:18:35.174 "ffdhe8192" 00:18:35.174 ] 00:18:35.174 } 00:18:35.174 }, 00:18:35.174 { 00:18:35.174 "method": "bdev_nvme_attach_controller", 00:18:35.174 "params": { 00:18:35.174 "name": "nvme0", 00:18:35.174 "trtype": "TCP", 00:18:35.174 "adrfam": "IPv4", 00:18:35.174 "traddr": "10.0.0.2", 00:18:35.174 "trsvcid": "4420", 00:18:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.174 "prchk_reftag": false, 00:18:35.174 "prchk_guard": false, 00:18:35.174 "ctrlr_loss_timeout_sec": 0, 00:18:35.174 "reconnect_delay_sec": 0, 00:18:35.174 "fast_io_fail_timeout_sec": 0, 00:18:35.174 "psk": "key0", 00:18:35.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.174 "hdgst": false, 00:18:35.174 "ddgst": false 00:18:35.174 } 00:18:35.174 }, 00:18:35.174 { 00:18:35.174 "method": "bdev_nvme_set_hotplug", 00:18:35.174 "params": { 00:18:35.174 "period_us": 100000, 00:18:35.174 "enable": false 00:18:35.174 } 00:18:35.174 }, 00:18:35.174 { 00:18:35.174 "method": "bdev_enable_histogram", 00:18:35.174 "params": { 00:18:35.174 "name": "nvme0n1", 00:18:35.174 "enable": true 00:18:35.174 } 00:18:35.174 }, 00:18:35.174 { 00:18:35.174 "method": "bdev_wait_for_examine" 00:18:35.174 } 00:18:35.174 ] 00:18:35.174 }, 00:18:35.174 { 00:18:35.174 "subsystem": "nbd", 00:18:35.174 "config": [] 00:18:35.174 } 00:18:35.174 ] 00:18:35.174 }' 00:18:35.174 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.174 04:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 [2024-05-15 04:19:23.092613] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:18:35.174 [2024-05-15 04:19:23.092688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406413 ] 00:18:35.174 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.174 [2024-05-15 04:19:23.164847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.433 [2024-05-15 04:19:23.284858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.689 [2024-05-15 04:19:23.466276] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.255 04:19:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.255 04:19:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:36.255 04:19:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.255 04:19:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:36.513 04:19:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.513 04:19:24 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.513 Running I/O for 1 seconds... 00:18:37.887 00:18:37.887 Latency(us) 00:18:37.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.887 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.887 Verification LBA range: start 0x0 length 0x2000 00:18:37.887 nvme0n1 : 1.08 1392.36 5.44 0.00 0.00 89030.31 6893.42 149907.53 00:18:37.887 =================================================================================================================== 00:18:37.887 Total : 1392.36 5.44 0.00 0.00 89030.31 6893.42 149907.53 00:18:37.887 0 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:37.887 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:37.888 nvmf_trace.0 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3406413 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3406413 ']' 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3406413 
00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3406413 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3406413' 00:18:37.888 killing process with pid 3406413 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3406413 00:18:37.888 Received shutdown signal, test time was about 1.000000 seconds 00:18:37.888 00:18:37.888 Latency(us) 00:18:37.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.888 =================================================================================================================== 00:18:37.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.888 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3406413 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.146 rmmod nvme_tcp 00:18:38.146 rmmod nvme_fabrics 00:18:38.146 rmmod nvme_keyring 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3406265 ']' 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3406265 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3406265 ']' 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3406265 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.146 04:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3406265 00:18:38.146 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:38.146 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:38.146 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3406265' 00:18:38.146 killing process with pid 3406265 00:18:38.146 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3406265 00:18:38.146 [2024-05-15 04:19:26.005905] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:38.146 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3406265 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.403 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.404 04:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.363 04:19:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:40.363 04:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rpE9gBMRwv /tmp/tmp.PpeQ7KWugV /tmp/tmp.5Q5vkkqPe8 00:18:40.363 00:18:40.363 real 1m26.295s 00:18:40.363 user 2m16.249s 00:18:40.363 sys 0m29.642s 00:18:40.363 04:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.363 04:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.363 ************************************ 00:18:40.363 END TEST nvmf_tls 00:18:40.363 ************************************ 00:18:40.363 04:19:28 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:40.363 04:19:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:40.363 04:19:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.363 04:19:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.623 ************************************ 00:18:40.623 START TEST nvmf_fips 00:18:40.623 ************************************ 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:40.623 * Looking for test storage... 
00:18:40.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.623 04:19:28 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:40.623 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:40.624 Error setting digest 00:18:40.624 0092AA951D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:40.624 0092AA951D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.624 04:19:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.156 
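About the fips.sh preamble above: it is effectively a three-part gate applied before any NVMe/TCP traffic is generated. The OpenSSL version must be 3.0.0 or newer, openssl list -providers must report a fips provider, and a plain openssl md5 must fail once OPENSSL_CONF points at the generated spdk_fips.conf, so the "Error setting digest" lines are the expected outcome rather than a failure. A stripped-down version of the same check for running by hand (spdk_fips.conf stands in for whatever config build_openssl_config produced in this run):

    openssl version | awk '{print $2}'      # must compare >= 3.0.0
    openssl list -providers | grep name     # must include a fips provider
    OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
        && echo "md5 still allowed - FIPS mode not active" \
        || echo "md5 rejected - FIPS provider enforced, as the test expects"
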
04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.156 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:18:43.157 00:18:43.157 --- 10.0.0.2 ping statistics --- 00:18:43.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.157 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
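The nvmf_tcp_init block above reduces to moving the target-side port into a private network namespace, addressing both ends, and opening TCP/4420; a condensed sketch of the same plumbing, with interface names, addresses and port exactly as in this run (root required):

  # Target port goes into its own namespace; initiator port stays in the default namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then sanity-check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1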
00:18:43.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:18:43.157 00:18:43.157 --- 10.0.0.1 ping statistics --- 00:18:43.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.157 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.157 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3409071 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3409071 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3409071 ']' 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:43.416 04:19:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.416 [2024-05-15 04:19:31.268005] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:43.416 [2024-05-15 04:19:31.268087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.416 [2024-05-15 04:19:31.349091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.674 [2024-05-15 04:19:31.465141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.674 [2024-05-15 04:19:31.465201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:43.674 [2024-05-15 04:19:31.465217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.674 [2024-05-15 04:19:31.465231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.674 [2024-05-15 04:19:31.465243] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.674 [2024-05-15 04:19:31.465275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:44.239 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:44.240 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.498 [2024-05-15 04:19:32.453947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.498 [2024-05-15 04:19:32.469892] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:44.498 [2024-05-15 04:19:32.469973] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.498 [2024-05-15 04:19:32.470167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.498 [2024-05-15 04:19:32.501175] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:44.498 malloc0 00:18:44.756 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3409224 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3409224 /var/tmp/bdevperf.sock 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 3409224 ']' 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.757 04:19:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:44.757 [2024-05-15 04:19:32.597222] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:18:44.757 [2024-05-15 04:19:32.597306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409224 ] 00:18:44.757 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.757 [2024-05-15 04:19:32.670206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.015 [2024-05-15 04:19:32.776229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.580 04:19:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.580 04:19:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:45.580 04:19:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:45.879 [2024-05-15 04:19:33.710895] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.879 [2024-05-15 04:19:33.711048] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:45.879 TLSTESTn1 00:18:45.879 04:19:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.140 Running I/O for 10 seconds... 
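The TLS setup exercised above comes down to writing the interchange-format PSK to a mode-0600 file and passing that same path to both the target listener and the initiator. A trimmed sketch of the initiator side, with the key value, paths and flags exactly as echoed in this run (the key is the fips.sh test key, not a secret):

  key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  # Attach to the target over TCP with the PSK; bdevperf then drives verify I/O against TLSTESTn1.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"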
00:18:56.110 00:18:56.110 Latency(us) 00:18:56.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.110 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.110 Verification LBA range: start 0x0 length 0x2000 00:18:56.110 TLSTESTn1 : 10.09 1239.36 4.84 0.00 0.00 102923.71 11796.48 135149.80 00:18:56.110 =================================================================================================================== 00:18:56.110 Total : 1239.36 4.84 0.00 0.00 102923.71 11796.48 135149.80 00:18:56.110 0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:56.110 nvmf_trace.0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3409224 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3409224 ']' 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3409224 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:56.110 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409224 00:18:56.369 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:56.369 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:56.369 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409224' 00:18:56.369 killing process with pid 3409224 00:18:56.369 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3409224 00:18:56.369 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.369 00:18:56.369 Latency(us) 00:18:56.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.369 =================================================================================================================== 00:18:56.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.369 [2024-05-15 04:19:44.151022] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:56.369 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3409224 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.627 rmmod nvme_tcp 00:18:56.627 rmmod nvme_fabrics 00:18:56.627 rmmod nvme_keyring 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3409071 ']' 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3409071 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3409071 ']' 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3409071 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409071 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409071' 00:18:56.627 killing process with pid 3409071 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3409071 00:18:56.627 [2024-05-15 04:19:44.484160] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:56.627 [2024-05-15 04:19:44.484209] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:56.627 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3409071 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.887 04:19:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.792 04:19:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:58.792 04:19:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.792 00:18:58.792 real 0m18.411s 00:18:58.792 user 0m23.168s 00:18:58.792 sys 0m6.740s 00:18:58.792 04:19:46 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.792 04:19:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.792 ************************************ 00:18:58.792 END TEST nvmf_fips 00:18:58.792 ************************************ 00:18:59.050 04:19:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:59.050 04:19:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:59.050 04:19:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:59.050 04:19:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:59.050 04:19:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.050 04:19:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:01.580 04:19:49 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.581 04:19:49 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:01.581 04:19:49 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
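nvmf.sh only dispatches into the ADQ perf test when the run is on physical hardware, the transport is TCP, and usable ports with netdevs were found. The guard echoed above is roughly equivalent to the following sketch; the variable names ($NET_TYPE, $TEST_TRANSPORT, $rootdir) are illustrative stand-ins, not necessarily the script's own:

  # Sketch of the dispatch condition above: phy run, TCP transport, at least one usable port.
  if [[ $NET_TYPE == phy && $TEST_TRANSPORT == tcp && ${#net_devs[@]} -gt 0 ]]; then
    run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
  fi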
00:19:01.581 04:19:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:01.581 04:19:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:01.581 04:19:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:01.581 ************************************ 00:19:01.581 START TEST nvmf_perf_adq 00:19:01.581 ************************************ 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:01.581 * Looking for test storage... 00:19:01.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.581 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.582 04:19:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:04.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
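The 'Found net devices under <bdf>' lines in this discovery pass come from globbing the port's net/ directory in sysfs; the same lookup can be done by hand with nothing more than the PCI address, assuming the port is bound to a netdev driver (ice in this run):

  # Sketch: map a PCI address to its kernel interface name(s), as the harness does above.
  pci=0000:0a:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$path" ] || continue          # glob stays literal if the port has no netdev
    echo "Found net device under $pci: ${path##*/}"
  done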
00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:04.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:04.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:04.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:04.112 04:19:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:04.680 04:19:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:06.586 04:19:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:11.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:11.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:11.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:11.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.859 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:11.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:19:11.860 00:19:11.860 --- 10.0.0.2 ping statistics --- 00:19:11.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.860 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:19:11.860 00:19:11.860 --- 10.0.0.1 ping statistics --- 00:19:11.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.860 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3415799 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3415799 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3415799 ']' 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
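Because nvmf_tgt is launched with --wait-for-rpc here, the harness has to wait for the application's RPC socket before any configuration call can succeed. The waitforlisten step above is, in essence, a poll loop like the following stand-in sketch (assumes it runs from the SPDK repo root; the real helper also verifies the pid is still alive):

  # Sketch: wait until the SPDK RPC socket answers before configuring the target.
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
      break
    fi
    sleep 0.1
  done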
00:19:11.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:11.860 04:19:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.860 [2024-05-15 04:19:59.375195] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:19:11.860 [2024-05-15 04:19:59.375278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.860 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.860 [2024-05-15 04:19:59.457810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.860 [2024-05-15 04:19:59.575545] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.860 [2024-05-15 04:19:59.575604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.860 [2024-05-15 04:19:59.575621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.860 [2024-05-15 04:19:59.575635] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.860 [2024-05-15 04:19:59.575647] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.860 [2024-05-15 04:19:59.575703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.860 [2024-05-15 04:19:59.575756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.860 [2024-05-15 04:19:59.575877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.860 [2024-05-15 04:19:59.575879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.427 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 [2024-05-15 04:20:00.494557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 Malloc1 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.686 [2024-05-15 04:20:00.545058] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:12.686 [2024-05-15 04:20:00.545355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3415959 00:19:12.686 04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:12.686 
04:20:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:12.686 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:14.617 "tick_rate": 2700000000, 00:19:14.617 "poll_groups": [ 00:19:14.617 { 00:19:14.617 "name": "nvmf_tgt_poll_group_000", 00:19:14.617 "admin_qpairs": 1, 00:19:14.617 "io_qpairs": 1, 00:19:14.617 "current_admin_qpairs": 1, 00:19:14.617 "current_io_qpairs": 1, 00:19:14.617 "pending_bdev_io": 0, 00:19:14.617 "completed_nvme_io": 17217, 00:19:14.617 "transports": [ 00:19:14.617 { 00:19:14.617 "trtype": "TCP" 00:19:14.617 } 00:19:14.617 ] 00:19:14.617 }, 00:19:14.617 { 00:19:14.617 "name": "nvmf_tgt_poll_group_001", 00:19:14.617 "admin_qpairs": 0, 00:19:14.617 "io_qpairs": 1, 00:19:14.617 "current_admin_qpairs": 0, 00:19:14.617 "current_io_qpairs": 1, 00:19:14.617 "pending_bdev_io": 0, 00:19:14.617 "completed_nvme_io": 20353, 00:19:14.617 "transports": [ 00:19:14.617 { 00:19:14.617 "trtype": "TCP" 00:19:14.617 } 00:19:14.617 ] 00:19:14.617 }, 00:19:14.617 { 00:19:14.617 "name": "nvmf_tgt_poll_group_002", 00:19:14.617 "admin_qpairs": 0, 00:19:14.617 "io_qpairs": 1, 00:19:14.617 "current_admin_qpairs": 0, 00:19:14.617 "current_io_qpairs": 1, 00:19:14.617 "pending_bdev_io": 0, 00:19:14.617 "completed_nvme_io": 20670, 00:19:14.617 "transports": [ 00:19:14.617 { 00:19:14.617 "trtype": "TCP" 00:19:14.617 } 00:19:14.617 ] 00:19:14.617 }, 00:19:14.617 { 00:19:14.617 "name": "nvmf_tgt_poll_group_003", 00:19:14.617 "admin_qpairs": 0, 00:19:14.617 "io_qpairs": 1, 00:19:14.617 "current_admin_qpairs": 0, 00:19:14.617 "current_io_qpairs": 1, 00:19:14.617 "pending_bdev_io": 0, 00:19:14.617 "completed_nvme_io": 19146, 00:19:14.617 "transports": [ 00:19:14.617 { 00:19:14.617 "trtype": "TCP" 00:19:14.617 } 00:19:14.617 ] 00:19:14.617 } 00:19:14.617 ] 00:19:14.617 }' 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:14.617 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:14.618 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:14.618 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:14.618 04:20:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3415959 00:19:22.727 Initializing NVMe Controllers 00:19:22.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:22.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:22.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:22.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:22.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:22.727 Initialization complete. Launching workers. 
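Before the perf results below, note what the nvmf_get_stats dump above established: with --enable-placement-id 0 and --sock-priority 0 (the non-ADQ baseline) each of the four poll groups owns exactly one I/O qpair. The check the script just ran amounts to the following sketch, assuming rpc_cmd wraps scripts/rpc.py against the target's RPC socket:

    count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    # Baseline expectation: all 4 poll groups carry exactly one I/O qpair each.
    [[ $count -ne 4 ]] && exit 1

The baseline perf numbers follow.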
00:19:22.727 ======================================================== 00:19:22.727 Latency(us) 00:19:22.727 Device Information : IOPS MiB/s Average min max 00:19:22.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10042.90 39.23 6373.11 2043.07 10862.99 00:19:22.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10674.20 41.70 5997.30 2950.77 9847.36 00:19:22.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10830.40 42.31 5910.42 1981.74 9335.23 00:19:22.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9069.60 35.43 7060.27 4187.80 10787.45 00:19:22.727 ======================================================== 00:19:22.727 Total : 40617.10 158.66 6304.41 1981.74 10862.99 00:19:22.727 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.727 rmmod nvme_tcp 00:19:22.727 rmmod nvme_fabrics 00:19:22.727 rmmod nvme_keyring 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3415799 ']' 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3415799 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3415799 ']' 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3415799 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:22.727 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3415799 00:19:22.985 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:22.985 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:22.985 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3415799' 00:19:22.985 killing process with pid 3415799 00:19:22.985 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3415799 00:19:22.985 [2024-05-15 04:20:10.752567] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:22.985 04:20:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3415799 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.245 04:20:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.245 04:20:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.149 04:20:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.149 04:20:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:25.149 04:20:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:25.715 04:20:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:27.614 04:20:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.884 
04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:32.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:32.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:32.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:32.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.884 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:32.885 00:19:32.885 --- 10.0.0.2 ping statistics --- 00:19:32.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.885 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:32.885 00:19:32.885 --- 10.0.0.1 ping statistics --- 00:19:32.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.885 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:32.885 net.core.busy_poll = 1 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:32.885 net.core.busy_read = 1 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3418462 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3418462 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3418462 ']' 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:32.885 04:20:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.885 [2024-05-15 04:20:20.502026] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:19:32.885 [2024-05-15 04:20:20.502109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.885 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.885 [2024-05-15 04:20:20.585736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.885 [2024-05-15 04:20:20.702893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.885 [2024-05-15 04:20:20.702965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.885 [2024-05-15 04:20:20.702990] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.885 [2024-05-15 04:20:20.703003] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.885 [2024-05-15 04:20:20.703015] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
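For reference, the ADQ plumbing that adq_configure_driver traced above reduces to the commands below, all taken from the trace; the ethtool/tc commands run against cvl_0_0 inside the cvl_0_0_ns_spdk namespace (the ip netns exec prefix is dropped here for brevity). The mqprio map 0 1 with queues 2@0 2@2 splits the queues into two traffic classes of two queues each, and the offloaded flower filter steers NVMe/TCP traffic to 10.0.0.2:4420 into the second class:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

Together with the --enable-placement-id 1 / --sock-priority 1 settings used in the target configuration that follows, this is what lets the target group the steered connections onto a subset of its poll groups.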
00:19:32.885 [2024-05-15 04:20:20.703083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.885 [2024-05-15 04:20:20.703136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.885 [2024-05-15 04:20:20.703243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.885 [2024-05-15 04:20:20.703246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.450 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 [2024-05-15 04:20:21.609496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 Malloc1 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 [2024-05-15 04:20:21.659880] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:33.707 [2024-05-15 04:20:21.660233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3418622 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:33.707 04:20:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:33.707 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:36.243 "tick_rate": 2700000000, 00:19:36.243 "poll_groups": [ 00:19:36.243 { 00:19:36.243 "name": "nvmf_tgt_poll_group_000", 00:19:36.243 "admin_qpairs": 1, 00:19:36.243 "io_qpairs": 3, 00:19:36.243 "current_admin_qpairs": 1, 00:19:36.243 "current_io_qpairs": 3, 00:19:36.243 "pending_bdev_io": 0, 00:19:36.243 "completed_nvme_io": 30027, 00:19:36.243 "transports": [ 00:19:36.243 { 00:19:36.243 "trtype": "TCP" 00:19:36.243 } 00:19:36.243 ] 00:19:36.243 }, 00:19:36.243 { 00:19:36.243 "name": "nvmf_tgt_poll_group_001", 00:19:36.243 "admin_qpairs": 0, 00:19:36.243 "io_qpairs": 1, 00:19:36.243 "current_admin_qpairs": 0, 00:19:36.243 "current_io_qpairs": 1, 00:19:36.243 "pending_bdev_io": 0, 00:19:36.243 "completed_nvme_io": 12801, 00:19:36.243 "transports": [ 00:19:36.243 { 00:19:36.243 "trtype": "TCP" 00:19:36.243 } 00:19:36.243 ] 00:19:36.243 }, 00:19:36.243 { 00:19:36.243 "name": 
"nvmf_tgt_poll_group_002", 00:19:36.243 "admin_qpairs": 0, 00:19:36.243 "io_qpairs": 0, 00:19:36.243 "current_admin_qpairs": 0, 00:19:36.243 "current_io_qpairs": 0, 00:19:36.243 "pending_bdev_io": 0, 00:19:36.243 "completed_nvme_io": 0, 00:19:36.243 "transports": [ 00:19:36.243 { 00:19:36.243 "trtype": "TCP" 00:19:36.243 } 00:19:36.243 ] 00:19:36.243 }, 00:19:36.243 { 00:19:36.243 "name": "nvmf_tgt_poll_group_003", 00:19:36.243 "admin_qpairs": 0, 00:19:36.243 "io_qpairs": 0, 00:19:36.243 "current_admin_qpairs": 0, 00:19:36.243 "current_io_qpairs": 0, 00:19:36.243 "pending_bdev_io": 0, 00:19:36.243 "completed_nvme_io": 0, 00:19:36.243 "transports": [ 00:19:36.243 { 00:19:36.243 "trtype": "TCP" 00:19:36.243 } 00:19:36.243 ] 00:19:36.243 } 00:19:36.243 ] 00:19:36.243 }' 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:36.243 04:20:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3418622 00:19:44.392 Initializing NVMe Controllers 00:19:44.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:44.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:44.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:44.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:44.392 Initialization complete. Launching workers. 
00:19:44.392 ======================================================== 00:19:44.392 Latency(us) 00:19:44.392 Device Information : IOPS MiB/s Average min max 00:19:44.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6437.90 25.15 9945.78 2313.76 51775.85 00:19:44.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5432.00 21.22 11789.70 2005.73 61429.64 00:19:44.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4903.40 19.15 13060.39 1936.78 60375.50 00:19:44.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5641.40 22.04 11352.27 1767.76 58613.85 00:19:44.392 ======================================================== 00:19:44.392 Total : 22414.70 87.56 11427.97 1767.76 61429.64 00:19:44.392 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.392 rmmod nvme_tcp 00:19:44.392 rmmod nvme_fabrics 00:19:44.392 rmmod nvme_keyring 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3418462 ']' 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3418462 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3418462 ']' 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3418462 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418462 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418462' 00:19:44.392 killing process with pid 3418462 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3418462 00:19:44.392 [2024-05-15 04:20:31.877793] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:44.392 04:20:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3418462 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.392 04:20:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.392 04:20:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.683 04:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.683 04:20:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:47.683 00:19:47.683 real 0m45.842s 00:19:47.683 user 2m31.265s 00:19:47.683 sys 0m14.896s 00:19:47.683 04:20:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:47.683 04:20:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.683 ************************************ 00:19:47.683 END TEST nvmf_perf_adq 00:19:47.683 ************************************ 00:19:47.683 04:20:35 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:47.683 04:20:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:47.683 04:20:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:47.683 04:20:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.683 ************************************ 00:19:47.683 START TEST nvmf_shutdown 00:19:47.684 ************************************ 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:47.684 * Looking for test storage... 
00:19:47.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:47.684 ************************************ 00:19:47.684 START TEST nvmf_shutdown_tc1 00:19:47.684 ************************************ 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:19:47.684 04:20:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.684 04:20:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.215 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.215 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.215 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.215 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.215 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:50.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:50.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.216 04:20:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:50.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:50.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:50.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:50.216 00:19:50.216 --- 10.0.0.2 ping statistics --- 00:19:50.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.216 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:19:50.216 00:19:50.216 --- 10.0.0.1 ping statistics --- 00:19:50.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.216 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:50.216 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3422320 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3422320 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3422320 ']' 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:50.217 04:20:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.217 [2024-05-15 04:20:38.035622] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
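The interface wiring that nvmf_tcp_init performed above, condensed to the underlying ip/iptables commands (addresses and interface names are the ones used in this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # the target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # the two pings above confirm the path works
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), and the 0x1E core mask is why the EAL output just below reports four reactors on cores 1-4.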
00:19:50.217 [2024-05-15 04:20:38.035703] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.217 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.217 [2024-05-15 04:20:38.116662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.475 [2024-05-15 04:20:38.234635] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.475 [2024-05-15 04:20:38.234692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.475 [2024-05-15 04:20:38.234720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.475 [2024-05-15 04:20:38.234732] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.475 [2024-05-15 04:20:38.234742] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.475 [2024-05-15 04:20:38.234816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.475 [2024-05-15 04:20:38.234876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.475 [2024-05-15 04:20:38.234936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.475 [2024-05-15 04:20:38.234927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.040 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.041 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.041 04:20:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.041 [2024-05-15 04:20:38.994497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.041 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.299 Malloc1 00:19:51.299 [2024-05-15 04:20:39.079719] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:51.299 [2024-05-15 04:20:39.080083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.299 Malloc2 00:19:51.299 Malloc3 00:19:51.299 Malloc4 00:19:51.299 Malloc5 00:19:51.299 Malloc6 00:19:51.556 Malloc7 00:19:51.556 Malloc8 00:19:51.556 Malloc9 00:19:51.556 Malloc10 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.556 04:20:39 
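The rpcs.txt assembled by the shutdown.sh@27/28 loop above is not echoed into the trace. Judging by the Malloc1-Malloc10 bdevs, the cnode1-cnode10 subsystems used later, and the listener on 10.0.0.2:4420, each of the ten iterations appends a stanza along these lines, which the bare rpc_cmd at shutdown.sh@35 then replays in one batch (illustrative only: the RPC method names are standard SPDK ones, but the malloc size/block size and serial number are guesses):

bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420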
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3422511 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3422511 /var/tmp/bdevperf.sock 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3422511 ']' 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.556 { 00:19:51.556 "params": { 00:19:51.556 "name": "Nvme$subsystem", 00:19:51.556 "trtype": "$TEST_TRANSPORT", 00:19:51.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.556 "adrfam": "ipv4", 00:19:51.556 "trsvcid": "$NVMF_PORT", 00:19:51.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.556 "hdgst": ${hdgst:-false}, 00:19:51.556 "ddgst": ${ddgst:-false} 00:19:51.556 }, 00:19:51.556 "method": "bdev_nvme_attach_controller" 00:19:51.556 } 00:19:51.556 EOF 00:19:51.556 )") 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.556 { 00:19:51.556 "params": { 00:19:51.556 "name": "Nvme$subsystem", 00:19:51.556 "trtype": "$TEST_TRANSPORT", 00:19:51.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.556 "adrfam": "ipv4", 00:19:51.556 "trsvcid": "$NVMF_PORT", 00:19:51.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.556 "hdgst": ${hdgst:-false}, 00:19:51.556 "ddgst": ${ddgst:-false} 00:19:51.556 }, 00:19:51.556 "method": "bdev_nvme_attach_controller" 00:19:51.556 } 00:19:51.556 EOF 00:19:51.556 )") 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.556 { 00:19:51.556 "params": { 00:19:51.556 "name": "Nvme$subsystem", 00:19:51.556 "trtype": "$TEST_TRANSPORT", 00:19:51.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.556 "adrfam": "ipv4", 00:19:51.556 "trsvcid": "$NVMF_PORT", 00:19:51.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.556 "hdgst": ${hdgst:-false}, 00:19:51.556 "ddgst": ${ddgst:-false} 00:19:51.556 }, 00:19:51.556 "method": "bdev_nvme_attach_controller" 00:19:51.556 } 00:19:51.556 EOF 00:19:51.556 )") 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.556 { 00:19:51.556 "params": { 00:19:51.556 "name": "Nvme$subsystem", 00:19:51.556 "trtype": "$TEST_TRANSPORT", 00:19:51.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.556 "adrfam": "ipv4", 00:19:51.556 "trsvcid": "$NVMF_PORT", 00:19:51.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.556 "hdgst": ${hdgst:-false}, 00:19:51.556 "ddgst": ${ddgst:-false} 00:19:51.556 }, 00:19:51.556 "method": "bdev_nvme_attach_controller" 00:19:51.556 } 00:19:51.556 EOF 00:19:51.556 )") 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.556 { 00:19:51.556 "params": { 00:19:51.556 "name": "Nvme$subsystem", 00:19:51.556 "trtype": "$TEST_TRANSPORT", 00:19:51.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.556 "adrfam": "ipv4", 00:19:51.556 "trsvcid": "$NVMF_PORT", 00:19:51.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.556 "hdgst": ${hdgst:-false}, 00:19:51.556 "ddgst": ${ddgst:-false} 00:19:51.556 }, 00:19:51.556 "method": "bdev_nvme_attach_controller" 00:19:51.556 } 00:19:51.556 EOF 00:19:51.556 )") 00:19:51.556 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.815 { 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme$subsystem", 00:19:51.815 "trtype": "$TEST_TRANSPORT", 00:19:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "$NVMF_PORT", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.815 "hdgst": ${hdgst:-false}, 00:19:51.815 "ddgst": ${ddgst:-false} 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 } 00:19:51.815 EOF 00:19:51.815 )") 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.815 { 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme$subsystem", 00:19:51.815 "trtype": "$TEST_TRANSPORT", 00:19:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "$NVMF_PORT", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.815 "hdgst": ${hdgst:-false}, 00:19:51.815 "ddgst": ${ddgst:-false} 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 } 00:19:51.815 EOF 00:19:51.815 )") 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.815 { 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme$subsystem", 00:19:51.815 "trtype": "$TEST_TRANSPORT", 00:19:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "$NVMF_PORT", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.815 "hdgst": ${hdgst:-false}, 00:19:51.815 "ddgst": ${ddgst:-false} 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 } 00:19:51.815 EOF 00:19:51.815 )") 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.815 { 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme$subsystem", 00:19:51.815 "trtype": "$TEST_TRANSPORT", 00:19:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "$NVMF_PORT", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.815 "hdgst": ${hdgst:-false}, 00:19:51.815 "ddgst": ${ddgst:-false} 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 } 00:19:51.815 EOF 00:19:51.815 )") 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.815 { 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme$subsystem", 00:19:51.815 "trtype": "$TEST_TRANSPORT", 00:19:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "$NVMF_PORT", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.815 "hdgst": ${hdgst:-false}, 00:19:51.815 "ddgst": ${ddgst:-false} 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 } 00:19:51.815 EOF 00:19:51.815 )") 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:51.815 04:20:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme1", 00:19:51.815 "trtype": "tcp", 00:19:51.815 "traddr": "10.0.0.2", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "4420", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.815 "hdgst": false, 00:19:51.815 "ddgst": false 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 },{ 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme2", 00:19:51.815 "trtype": "tcp", 00:19:51.815 "traddr": "10.0.0.2", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "4420", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:51.815 "hdgst": false, 00:19:51.815 "ddgst": false 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 },{ 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme3", 00:19:51.815 "trtype": "tcp", 00:19:51.815 "traddr": "10.0.0.2", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "4420", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:51.815 "hdgst": false, 00:19:51.815 "ddgst": false 00:19:51.815 }, 00:19:51.815 "method": "bdev_nvme_attach_controller" 00:19:51.815 },{ 00:19:51.815 "params": { 00:19:51.815 "name": "Nvme4", 00:19:51.815 "trtype": "tcp", 00:19:51.815 "traddr": "10.0.0.2", 00:19:51.815 "adrfam": "ipv4", 00:19:51.815 "trsvcid": "4420", 00:19:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:51.815 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:51.815 "hdgst": false, 00:19:51.815 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme5", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:51.816 "hdgst": false, 00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme6", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:51.816 "hdgst": false, 00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme7", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:51.816 "hdgst": false, 00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme8", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:51.816 "hdgst": false, 
00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme9", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:51.816 "hdgst": false, 00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 },{ 00:19:51.816 "params": { 00:19:51.816 "name": "Nvme10", 00:19:51.816 "trtype": "tcp", 00:19:51.816 "traddr": "10.0.0.2", 00:19:51.816 "adrfam": "ipv4", 00:19:51.816 "trsvcid": "4420", 00:19:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:51.816 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:51.816 "hdgst": false, 00:19:51.816 "ddgst": false 00:19:51.816 }, 00:19:51.816 "method": "bdev_nvme_attach_controller" 00:19:51.816 }' 00:19:51.816 [2024-05-15 04:20:39.595189] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:19:51.816 [2024-05-15 04:20:39.595303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:51.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.816 [2024-05-15 04:20:39.669804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.816 [2024-05-15 04:20:39.780526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3422511 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:53.190 04:20:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:54.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3422511 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3422320 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
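Stripped of xtrace noise, the sequence just traced is essentially the core of shutdown_tc1: start a throw-away initiator (bdev_svc) against all ten subsystems, kill it ungracefully, confirm the target survived, then run a short bdevperf verify pass over the same config. Condensed from the commands above:

$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
waitforlisten $perfpid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init     # wait until all ten controllers are attached
kill -9 $perfpid                                          # abrupt initiator death (pid 3422511 above)
kill -0 $nvmfpid                                          # target (pid 3422320) must still be running
$rootdir/build/examples/bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1                         # 64-deep, 64 KiB verify I/O for one second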
-- # local subsystem config 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.124 { 00:19:54.124 "params": { 00:19:54.124 "name": "Nvme$subsystem", 00:19:54.124 "trtype": "$TEST_TRANSPORT", 00:19:54.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.124 "adrfam": "ipv4", 00:19:54.124 "trsvcid": "$NVMF_PORT", 00:19:54.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.124 "hdgst": ${hdgst:-false}, 00:19:54.124 "ddgst": ${ddgst:-false} 00:19:54.124 }, 00:19:54.124 "method": "bdev_nvme_attach_controller" 00:19:54.124 } 00:19:54.124 EOF 00:19:54.124 )") 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.124 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.124 { 00:19:54.124 "params": { 00:19:54.124 "name": "Nvme$subsystem", 00:19:54.124 "trtype": "$TEST_TRANSPORT", 00:19:54.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.124 "adrfam": "ipv4", 00:19:54.124 "trsvcid": "$NVMF_PORT", 00:19:54.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.124 "hdgst": ${hdgst:-false}, 00:19:54.124 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.125 { 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme$subsystem", 00:19:54.125 "trtype": "$TEST_TRANSPORT", 00:19:54.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "$NVMF_PORT", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.125 "hdgst": ${hdgst:-false}, 00:19:54.125 "ddgst": ${ddgst:-false} 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 } 00:19:54.125 EOF 00:19:54.125 )") 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
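This is the second run of gen_nvmf_target_json (the first fed bdev_svc, this one feeds bdevperf). Stripped of the xtrace noise, the pattern traced above builds one JSON fragment per subsystem with a here-document and comma-joins them; a condensed sketch (the fragments are ultimately spliced into a full bdev "subsystems" config and pretty-printed by the jq . call, which is elided here):

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}"    # the comma-joined string, printed in full just below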
00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:54.125 04:20:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme1", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.125 "hdgst": false, 00:19:54.125 "ddgst": false 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 },{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme2", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.125 "hdgst": false, 00:19:54.125 "ddgst": false 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 },{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme3", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.125 "hdgst": false, 00:19:54.125 "ddgst": false 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 },{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme4", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.125 "hdgst": false, 00:19:54.125 "ddgst": false 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 },{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme5", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:54.125 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.125 "hdgst": false, 00:19:54.125 "ddgst": false 00:19:54.125 }, 00:19:54.125 "method": "bdev_nvme_attach_controller" 00:19:54.125 },{ 00:19:54.125 "params": { 00:19:54.125 "name": "Nvme6", 00:19:54.125 "trtype": "tcp", 00:19:54.125 "traddr": "10.0.0.2", 00:19:54.125 "adrfam": "ipv4", 00:19:54.125 "trsvcid": "4420", 00:19:54.125 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.126 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.126 "hdgst": false, 00:19:54.126 "ddgst": false 00:19:54.126 }, 00:19:54.126 "method": "bdev_nvme_attach_controller" 00:19:54.126 },{ 00:19:54.126 "params": { 00:19:54.126 "name": "Nvme7", 00:19:54.126 "trtype": "tcp", 00:19:54.126 "traddr": "10.0.0.2", 00:19:54.126 "adrfam": "ipv4", 00:19:54.126 "trsvcid": "4420", 00:19:54.126 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.126 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.126 "hdgst": false, 00:19:54.126 "ddgst": false 00:19:54.126 }, 00:19:54.126 "method": "bdev_nvme_attach_controller" 00:19:54.126 },{ 00:19:54.126 "params": { 00:19:54.126 "name": "Nvme8", 00:19:54.126 "trtype": "tcp", 00:19:54.126 "traddr": "10.0.0.2", 00:19:54.126 "adrfam": "ipv4", 00:19:54.126 "trsvcid": "4420", 00:19:54.126 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:54.126 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.126 "hdgst": false, 
00:19:54.126 "ddgst": false 00:19:54.126 }, 00:19:54.126 "method": "bdev_nvme_attach_controller" 00:19:54.126 },{ 00:19:54.126 "params": { 00:19:54.126 "name": "Nvme9", 00:19:54.126 "trtype": "tcp", 00:19:54.126 "traddr": "10.0.0.2", 00:19:54.126 "adrfam": "ipv4", 00:19:54.126 "trsvcid": "4420", 00:19:54.126 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.126 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.126 "hdgst": false, 00:19:54.126 "ddgst": false 00:19:54.126 }, 00:19:54.126 "method": "bdev_nvme_attach_controller" 00:19:54.126 },{ 00:19:54.126 "params": { 00:19:54.126 "name": "Nvme10", 00:19:54.126 "trtype": "tcp", 00:19:54.126 "traddr": "10.0.0.2", 00:19:54.126 "adrfam": "ipv4", 00:19:54.126 "trsvcid": "4420", 00:19:54.126 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.126 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.126 "hdgst": false, 00:19:54.126 "ddgst": false 00:19:54.126 }, 00:19:54.126 "method": "bdev_nvme_attach_controller" 00:19:54.126 }' 00:19:54.126 [2024-05-15 04:20:42.080891] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:19:54.126 [2024-05-15 04:20:42.081009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422808 ] 00:19:54.126 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.384 [2024-05-15 04:20:42.157987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.384 [2024-05-15 04:20:42.267691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.753 Running I/O for 1 seconds... 00:19:56.688 00:19:56.688 Latency(us) 00:19:56.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.688 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme1n1 : 1.11 230.96 14.43 0.00 0.00 272326.16 21068.61 254765.13 00:19:56.688 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme2n1 : 1.12 228.52 14.28 0.00 0.00 272777.10 22136.60 242337.56 00:19:56.688 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme3n1 : 1.14 228.99 14.31 0.00 0.00 266300.94 9514.86 257872.02 00:19:56.688 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme4n1 : 1.12 229.34 14.33 0.00 0.00 262380.85 22622.06 288940.94 00:19:56.688 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme5n1 : 1.13 227.28 14.20 0.00 0.00 259709.16 23981.32 239230.67 00:19:56.688 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme6n1 : 1.17 218.18 13.64 0.00 0.00 267062.99 26408.58 284280.60 00:19:56.688 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme7n1 : 1.18 272.14 17.01 0.00 0.00 210335.67 22233.69 256318.58 00:19:56.688 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 
00:19:56.688 Nvme8n1 : 1.14 225.07 14.07 0.00 0.00 248946.16 20194.80 257872.02 00:19:56.688 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme9n1 : 1.20 266.40 16.65 0.00 0.00 208494.59 12718.84 259425.47 00:19:56.688 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.688 Verification LBA range: start 0x0 length 0x400 00:19:56.688 Nvme10n1 : 1.18 216.36 13.52 0.00 0.00 251576.89 22719.15 302921.96 00:19:56.688 =================================================================================================================== 00:19:56.688 Total : 2343.23 146.45 0.00 0.00 249993.96 9514.86 302921.96 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.947 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.947 rmmod nvme_tcp 00:19:56.947 rmmod nvme_fabrics 00:19:56.947 rmmod nvme_keyring 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3422320 ']' 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3422320 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3422320 ']' 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3422320 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.206 04:20:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3422320 00:19:57.206 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:57.206 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:57.206 04:20:45 
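A quick sanity check on the bdevperf table above: at the 64 KiB I/O size there are exactly 16 I/Os per MiB, so the MiB/s column should be IOPS/16, and the per-device IOPS should add up to the Total row:

awk 'BEGIN { print 230.96 / 16 }'     # Nvme1n1: 14.435, vs the 14.43 MiB/s reported
awk 'BEGIN { print 230.96 + 228.52 + 228.99 + 229.34 + 227.28 + 218.18 + 272.14 + 225.07 + 266.40 + 216.36 }'
                                      # 2343.24 IOPS, matching the Total row (2343.23) to within rounding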
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3422320' 00:19:57.206 killing process with pid 3422320 00:19:57.206 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3422320 00:19:57.206 [2024-05-15 04:20:45.005773] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:57.206 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3422320 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.774 04:20:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.683 00:19:59.683 real 0m12.200s 00:19:59.683 user 0m33.121s 00:19:59.683 sys 0m3.572s 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 ************************************ 00:19:59.683 END TEST nvmf_shutdown_tc1 00:19:59.683 ************************************ 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 ************************************ 00:19:59.683 START TEST nvmf_shutdown_tc2 00:19:59.683 ************************************ 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
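For completeness, the tc1 teardown that finished above mirrors the setup. Reduced to the underlying commands (the namespace deletion is not itself traced, so that line is an assumption about what _remove_spdk_ns does):

modprobe -v -r nvme-tcp              # triggers the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill $nvmfpid && wait $nvmfpid       # killprocess 3422320: stop the in-namespace nvmf_tgt
ip netns delete cvl_0_0_ns_spdk      # assumed: _remove_spdk_ns dropping the namespace it created
ip -4 addr flush cvl_0_1             # release the initiator-side address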
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.683 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.684 04:20:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.684 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:19:59.944 00:19:59.944 --- 10.0.0.2 ping statistics --- 00:19:59.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.944 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:59.944 00:19:59.944 --- 10.0.0.1 ping statistics --- 00:19:59.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.944 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3423575 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3423575 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3423575 ']' 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.944 04:20:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 [2024-05-15 04:20:47.857853] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
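The nvmftestinit / nvmf_tcp_init sequence traced above splits the two ice-driven ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, reachability is ping-tested in both directions, and nvmf_tgt is then launched inside the namespace. Condensed into a minimal stand-alone sketch (the interface names are the ones this host reports; nvmf/common.sh does additional bookkeeping that is not repeated here):

  # target side lives in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator side stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify reachability both ways, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

One detail visible in the nvmf_tgt command line traced just above: nvmf/common.sh@270 prepends the namespace wrapper to NVMF_APP on every nvmftestinit call in the same shell, which is why the invocation carries "ip netns exec cvl_0_0_ns_spdk" twice for this test case and three times for nvmf_shutdown_tc3 below; re-entering the same namespace is harmless, so the target still starts where intended.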
00:19:59.944 [2024-05-15 04:20:47.857959] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.944 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.944 [2024-05-15 04:20:47.940022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.203 [2024-05-15 04:20:48.063121] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.203 [2024-05-15 04:20:48.063172] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.203 [2024-05-15 04:20:48.063202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.203 [2024-05-15 04:20:48.063214] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.203 [2024-05-15 04:20:48.063225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.203 [2024-05-15 04:20:48.063336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.203 [2024-05-15 04:20:48.063454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.203 [2024-05-15 04:20:48.063531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:00.203 [2024-05-15 04:20:48.063533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.170 [2024-05-15 04:20:48.826871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.170 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.171 04:20:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.171 Malloc1 00:20:01.171 [2024-05-15 04:20:48.901530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:01.171 [2024-05-15 04:20:48.901838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.171 Malloc2 00:20:01.171 Malloc3 00:20:01.171 Malloc4 00:20:01.171 Malloc5 00:20:01.171 Malloc6 00:20:01.434 Malloc7 00:20:01.434 Malloc8 00:20:01.434 Malloc9 00:20:01.434 Malloc10 00:20:01.434 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.434 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:01.434 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.434 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.434 04:20:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3423877 00:20:01.434 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3423877 /var/tmp/bdevperf.sock 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3423877 ']' 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.435 "trsvcid": "$NVMF_PORT", 00:20:01.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.435 "hdgst": ${hdgst:-false}, 00:20:01.435 "ddgst": ${ddgst:-false} 00:20:01.435 }, 00:20:01.435 "method": "bdev_nvme_attach_controller" 00:20:01.435 } 00:20:01.435 EOF 00:20:01.435 )") 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.435 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.435 { 00:20:01.435 "params": { 00:20:01.435 "name": "Nvme$subsystem", 00:20:01.435 "trtype": "$TEST_TRANSPORT", 00:20:01.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.435 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "$NVMF_PORT", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.436 "hdgst": ${hdgst:-false}, 00:20:01.436 "ddgst": ${ddgst:-false} 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 } 00:20:01.436 EOF 00:20:01.436 )") 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.436 { 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme$subsystem", 00:20:01.436 "trtype": "$TEST_TRANSPORT", 00:20:01.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "$NVMF_PORT", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.436 "hdgst": ${hdgst:-false}, 00:20:01.436 "ddgst": ${ddgst:-false} 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 } 00:20:01.436 EOF 00:20:01.436 )") 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
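Each pass through the loop above appends one bdev_nvme_attach_controller stanza (one per subsystem, cnode1 through cnode10) to a config array via a here-document, and the jq / IFS=, / printf steps join those stanzas into the JSON configuration that bdevperf reads on /dev/fd/63. A simplified, self-contained reconstruction of that generator is sketched below; the field values are the ones visible in this run (traddr 10.0.0.2, trsvcid 4420), and the outer "subsystems"/"bdev" wrapper is an assumption about the final document shape rather than a verbatim copy of gen_nvmf_target_json in nvmf/common.sh:

  # emit a bdevperf JSON config with one NVMe-oF controller per subsystem id
  gen_target_json_sketch() {
      local subsystem stanzas=()
      for subsystem in "${@:-1}"; do
          stanzas+=("{ \"params\": { \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\",
              \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
              \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
              \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
              \"hdgst\": false, \"ddgst\": false },
              \"method\": \"bdev_nvme_attach_controller\" }")
      done
      local IFS=,   # join the stanzas with commas, then let jq validate/pretty-print
      echo "{ \"subsystems\": [ { \"subsystem\": \"bdev\",
              \"config\": [ ${stanzas[*]} ] } ] }" | jq .
  }

  # bdevperf consumes the result through process substitution, which is why the
  # traced command line shows --json /dev/fd/63:
  #   bdevperf --json <(gen_target_json_sketch {1..10}) -q 64 -o 65536 -w verify -t 10

The fully expanded stanzas for Nvme1 through Nvme10 are printed a little further down in the trace.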
00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:01.436 04:20:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme1", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme2", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme3", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme4", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme5", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme6", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme7", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme8", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:01.436 "hdgst": false, 
00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme9", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 },{ 00:20:01.436 "params": { 00:20:01.436 "name": "Nvme10", 00:20:01.436 "trtype": "tcp", 00:20:01.436 "traddr": "10.0.0.2", 00:20:01.436 "adrfam": "ipv4", 00:20:01.436 "trsvcid": "4420", 00:20:01.436 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:01.436 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:01.436 "hdgst": false, 00:20:01.436 "ddgst": false 00:20:01.436 }, 00:20:01.436 "method": "bdev_nvme_attach_controller" 00:20:01.436 }' 00:20:01.436 [2024-05-15 04:20:49.408183] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:01.436 [2024-05-15 04:20:49.408276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423877 ] 00:20:01.436 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.695 [2024-05-15 04:20:49.481822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.695 [2024-05-15 04:20:49.592060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.596 Running I/O for 10 seconds... 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:03.596 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:03.597 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:03.855 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:03.855 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:03.855 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.855 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:03.856 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3423877 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3423877 ']' 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3423877 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
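The polling traced here is waitforio from target/shutdown.sh: it samples num_read_ops for Nvme1n1 over the bdevperf RPC socket until at least 100 reads have completed (at most 10 attempts, 0.25 s apart), i.e. until bdevperf is demonstrably driving I/O and the target can be torn down underneath it. A condensed sketch of the same logic, assuming rpc_cmd is the usual wrapper around scripts/rpc.py:

  # wait until the given bdev has completed >= 100 reads, polling every 0.25 s
  waitforio() {
      local sock=$1 bdev=$2 i ret=1 read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

  waitforio /var/tmp/bdevperf.sock Nvme1n1

In this run the sampled count goes 3, then 67, then 131, so the third sample crosses the threshold and the test proceeds to kill the bdevperf process (pid 3423877).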
00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.114 04:20:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3423877 00:20:04.114 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:04.114 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:04.114 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3423877' 00:20:04.114 killing process with pid 3423877 00:20:04.114 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3423877 00:20:04.114 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3423877 00:20:04.373 Received shutdown signal, test time was about 0.951287 seconds 00:20:04.373 00:20:04.373 Latency(us) 00:20:04.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.373 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.373 Verification LBA range: start 0x0 length 0x400 00:20:04.373 Nvme1n1 : 0.91 210.74 13.17 0.00 0.00 299861.33 21942.42 301368.51 00:20:04.373 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.373 Verification LBA range: start 0x0 length 0x400 00:20:04.373 Nvme2n1 : 0.91 212.06 13.25 0.00 0.00 291641.77 21165.70 278066.82 00:20:04.373 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.373 Verification LBA range: start 0x0 length 0x400 00:20:04.373 Nvme3n1 : 0.93 207.23 12.95 0.00 0.00 293162.35 22330.79 299815.06 00:20:04.374 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme4n1 : 0.95 202.03 12.63 0.00 0.00 294447.79 22719.15 320009.86 00:20:04.374 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme5n1 : 0.89 219.10 13.69 0.00 0.00 263964.94 4102.07 260978.92 00:20:04.374 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme6n1 : 0.95 202.65 12.67 0.00 0.00 281761.82 18641.35 332437.43 00:20:04.374 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme7n1 : 0.94 203.18 12.70 0.00 0.00 274972.07 32428.18 299815.06 00:20:04.374 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme8n1 : 0.94 204.29 12.77 0.00 0.00 267212.42 43108.12 226803.11 00:20:04.374 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme9n1 : 0.93 205.89 12.87 0.00 0.00 258985.53 27185.30 265639.25 00:20:04.374 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.374 Verification LBA range: start 0x0 length 0x400 00:20:04.374 Nvme10n1 : 0.88 145.35 9.08 0.00 0.00 353427.91 23495.87 351078.78 00:20:04.374 =================================================================================================================== 00:20:04.374 Total : 2012.53 125.78 
0.00 0.00 285639.01 4102.07 351078.78 00:20:04.632 04:20:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3423575 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.565 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.566 rmmod nvme_tcp 00:20:05.566 rmmod nvme_fabrics 00:20:05.566 rmmod nvme_keyring 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3423575 ']' 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3423575 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3423575 ']' 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3423575 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3423575 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3423575' 00:20:05.566 killing process with pid 3423575 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3423575 00:20:05.566 [2024-05-15 04:20:53.511577] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:20:05.566 04:20:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3423575 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.132 04:20:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.664 00:20:08.664 real 0m8.439s 00:20:08.664 user 0m25.808s 00:20:08.664 sys 0m1.658s 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.664 ************************************ 00:20:08.664 END TEST nvmf_shutdown_tc2 00:20:08.664 ************************************ 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:08.664 ************************************ 00:20:08.664 START TEST nvmf_shutdown_tc3 00:20:08.664 ************************************ 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.664 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:08.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:08.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:08.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:08.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:20:08.665 00:20:08.665 --- 10.0.0.2 ping statistics --- 00:20:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.665 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:20:08.665 00:20:08.665 --- 10.0.0.1 ping statistics --- 00:20:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.665 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3424806 00:20:08.665 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:08.665 04:20:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3424806 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3424806 ']' 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.666 04:20:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.666 [2024-05-15 04:20:56.337153] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:08.666 [2024-05-15 04:20:56.337247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.666 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.666 [2024-05-15 04:20:56.412858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.666 [2024-05-15 04:20:56.524071] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.666 [2024-05-15 04:20:56.524128] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.666 [2024-05-15 04:20:56.524155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.666 [2024-05-15 04:20:56.524166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.666 [2024-05-15 04:20:56.524175] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
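Up to this point the fixture has assigned cvl_0_0 as the target-side interface and cvl_0_1 as the initiator-side interface, moved the target port into the cvl_0_0_ns_spdk namespace, and launched nvmf_tgt inside that namespace before blocking until the RPC socket answers. A reduced sketch of that sequence, with a plain socket-polling loop standing in for the suite's waitforlisten helper (the loop and its retry interval are illustrative only; interface names, addresses, core mask and paths are the ones in the trace):

# namespace plumbing performed by nvmf_tcp_init in nvmf/common.sh
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# start the target inside the namespace, then wait for its RPC socket to appear
# (this polling loop is a stand-in for waitforlisten, not its actual implementation)
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  [[ -S /var/tmp/spdk.sock ]] && break
  sleep 0.1
done
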
00:20:08.666 [2024-05-15 04:20:56.524259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.666 [2024-05-15 04:20:56.524324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.666 [2024-05-15 04:20:56.524391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.666 [2024-05-15 04:20:56.524394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 [2024-05-15 04:20:57.299848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.599 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 Malloc1 00:20:09.599 [2024-05-15 04:20:57.387703] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:09.599 [2024-05-15 04:20:57.388034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.599 Malloc2 00:20:09.599 Malloc3 00:20:09.599 Malloc4 00:20:09.599 Malloc5 00:20:09.599 Malloc6 00:20:09.858 Malloc7 00:20:09.858 Malloc8 00:20:09.858 Malloc9 00:20:09.858 Malloc10 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3424996 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3424996 /var/tmp/bdevperf.sock 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3424996 ']' 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:09.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.858 { 00:20:09.858 "params": { 00:20:09.858 "name": "Nvme$subsystem", 00:20:09.858 "trtype": "$TEST_TRANSPORT", 00:20:09.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.858 "adrfam": "ipv4", 00:20:09.858 "trsvcid": "$NVMF_PORT", 00:20:09.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.858 "hdgst": ${hdgst:-false}, 00:20:09.858 "ddgst": ${ddgst:-false} 00:20:09.858 }, 00:20:09.858 "method": "bdev_nvme_attach_controller" 00:20:09.858 } 00:20:09.858 EOF 00:20:09.858 )") 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.858 { 00:20:09.858 "params": { 00:20:09.858 "name": "Nvme$subsystem", 00:20:09.858 "trtype": "$TEST_TRANSPORT", 00:20:09.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.858 "adrfam": "ipv4", 00:20:09.858 "trsvcid": "$NVMF_PORT", 00:20:09.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.858 "hdgst": ${hdgst:-false}, 00:20:09.858 "ddgst": ${ddgst:-false} 00:20:09.858 }, 00:20:09.858 "method": "bdev_nvme_attach_controller" 00:20:09.858 } 00:20:09.858 EOF 00:20:09.858 )") 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.858 { 00:20:09.858 "params": { 00:20:09.858 "name": "Nvme$subsystem", 00:20:09.858 "trtype": "$TEST_TRANSPORT", 00:20:09.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.858 "adrfam": "ipv4", 00:20:09.858 "trsvcid": "$NVMF_PORT", 00:20:09.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.858 "hdgst": ${hdgst:-false}, 00:20:09.858 "ddgst": ${ddgst:-false} 00:20:09.858 }, 00:20:09.858 "method": "bdev_nvme_attach_controller" 00:20:09.858 } 00:20:09.858 EOF 00:20:09.858 )") 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.858 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.858 { 00:20:09.858 "params": { 
00:20:09.858 "name": "Nvme$subsystem", 00:20:09.858 "trtype": "$TEST_TRANSPORT", 00:20:09.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.858 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": "Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": "Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": "Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": 
"Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": "Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.859 { 00:20:09.859 "params": { 00:20:09.859 "name": "Nvme$subsystem", 00:20:09.859 "trtype": "$TEST_TRANSPORT", 00:20:09.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.859 "adrfam": "ipv4", 00:20:09.859 "trsvcid": "$NVMF_PORT", 00:20:09.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.859 "hdgst": ${hdgst:-false}, 00:20:09.859 "ddgst": ${ddgst:-false} 00:20:09.859 }, 00:20:09.859 "method": "bdev_nvme_attach_controller" 00:20:09.859 } 00:20:09.859 EOF 00:20:09.859 )") 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:09.859 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:10.117 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:10.117 04:20:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme1", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme2", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme3", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme4", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme5", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme6", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme7", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:10.117 "hdgst": false, 00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme8", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:10.117 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:10.117 "hdgst": false, 
00:20:10.117 "ddgst": false 00:20:10.117 }, 00:20:10.117 "method": "bdev_nvme_attach_controller" 00:20:10.117 },{ 00:20:10.117 "params": { 00:20:10.117 "name": "Nvme9", 00:20:10.117 "trtype": "tcp", 00:20:10.117 "traddr": "10.0.0.2", 00:20:10.117 "adrfam": "ipv4", 00:20:10.117 "trsvcid": "4420", 00:20:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:10.118 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:10.118 "hdgst": false, 00:20:10.118 "ddgst": false 00:20:10.118 }, 00:20:10.118 "method": "bdev_nvme_attach_controller" 00:20:10.118 },{ 00:20:10.118 "params": { 00:20:10.118 "name": "Nvme10", 00:20:10.118 "trtype": "tcp", 00:20:10.118 "traddr": "10.0.0.2", 00:20:10.118 "adrfam": "ipv4", 00:20:10.118 "trsvcid": "4420", 00:20:10.118 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:10.118 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:10.118 "hdgst": false, 00:20:10.118 "ddgst": false 00:20:10.118 }, 00:20:10.118 "method": "bdev_nvme_attach_controller" 00:20:10.118 }' 00:20:10.118 [2024-05-15 04:20:57.882009] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:10.118 [2024-05-15 04:20:57.882089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424996 ] 00:20:10.118 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.118 [2024-05-15 04:20:57.956513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.118 [2024-05-15 04:20:58.066761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.021 Running I/O for 10 seconds... 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
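The waitforio helper traced over the next entries is a bounded poll: at most ten times, every 0.25 s, it asks bdevperf's RPC server for Nvme1n1's I/O statistics and stops once num_read_ops reaches 100 (the counter goes 3, then 67, then 131 in this run). A reduced sketch of that loop, with scripts/rpc.py called directly as a stand-in for the suite's rpc_cmd wrapper (an assumption for illustration; threshold, retry count and interval are the values in the trace):

ret=1
for ((i = 10; i != 0; i--)); do
  # query bdevperf's RPC socket for per-bdev I/O stats and pull out the read counter
  read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
    jq -r '.bdevs[0].num_read_ops')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break
  fi
  sleep 0.25
done
# ret is 0 once enough reads have completed, which is what lets the test move on to killprocess
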
00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.021 04:20:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.021 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:12.021 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:12.021 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.284 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.542 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:12.542 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:12.542 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:12.542 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:12.542 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.818 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3424806 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3424806 ']' 00:20:12.819 04:21:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3424806 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3424806 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3424806' 00:20:12.819 killing process with pid 3424806 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3424806 00:20:12.819 04:21:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3424806 00:20:12.819 [2024-05-15 04:21:00.622653] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:12.819 [2024-05-15 04:21:00.623145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ef0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.623179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ef0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.623194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ef0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.623996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.624153] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa8b0 is same with the state(5) to be set 00:20:12.819 [2024-05-15 04:21:00.626006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8390 is same with the state(5) to be set 00:20:12.820 [2024-05-15 04:21:00.628133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 
04:21:00.628848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.628907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8830 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same 
with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630525] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.821 [2024-05-15 04:21:00.630663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the 
state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.630828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8cd0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.631875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.631898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.631926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.631956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.631970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631978] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.631983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.631993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.631996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143210 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 
[2024-05-15 04:21:00.632173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8d300 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.822 [2024-05-15 04:21:00.632405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.822 [2024-05-15 04:21:00.632418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e7c0 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.822 [2024-05-15 04:21:00.632443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same 
with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa26b0 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9170 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.632757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:12.823 [2024-05-15 04:21:00.632806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.823 [2024-05-15 04:21:00.632861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.823 [2024-05-15 04:21:00.632874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148d10 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634204] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.823 [2024-05-15 04:21:00.634420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the 
state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.634887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9630 is same with the state(5) to be set 00:20:12.824 [2024-05-15 04:21:00.635520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:12.824 [2024-05-15 04:21:00.635783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.635984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.635998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.824 [2024-05-15 04:21:00.636102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.824 [2024-05-15 04:21:00.636319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.824 [2024-05-15 04:21:00.636333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 
[2024-05-15 04:21:00.636414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 
04:21:00.636711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.636970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.636985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 
04:21:00.637029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 
04:21:00.637338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.825 [2024-05-15 04:21:00.637503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.825 [2024-05-15 04:21:00.637591] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e92a0 was disconnected and freed. reset controller. 
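Every completion in the dump above is printed as "ABORTED - SQ DELETION (00/08)": the pair in parentheses is the NVMe status code type and status code, where type 0x0 (generic command status) together with code 0x08 means "Command Aborted due to SQ Deletion", i.e. every WRITE still outstanding on qid:1 is failed when the I/O qpair is deleted for the controller reset. Below is a minimal decoding sketch, assuming an illustrative struct and field names; it is not SPDK's spdk_nvme_print_completion, only a stand-alone example of what the "(SCT/SC)" notation encodes.

```c
/* Minimal sketch: decode the "(SCT/SC)" pair printed in the completion lines
 * above, e.g. "(00/08)".  The struct and names below are illustrative only;
 * they are not SPDK types. */
#include <stdint.h>
#include <stdio.h>

struct cpl_status {
    uint8_t sct;   /* status code type: 0x0 = generic command status        */
    uint8_t sc;    /* status code: 0x08 = command aborted due to SQ deletion */
    uint8_t dnr;   /* do-not-retry bit, shown as "dnr:0" in the log          */
};

static const char *cpl_status_str(const struct cpl_status *s)
{
    if (s->sct == 0x0 && s->sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    struct cpl_status s = { .sct = 0x00, .sc = 0x08, .dnr = 0 };
    printf("(%02x/%02x) dnr:%u -> %s\n", s.sct, s.sc, s.dnr, cpl_status_str(&s));
    return 0;
}
```

Expected output is "(00/08) dnr:0 -> ABORTED - SQ DELETION", matching the NOTICE lines in the dump.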
00:20:12.825 [2024-05-15 04:21:00.640531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:12.825 [2024-05-15 04:21:00.640573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e7c0 (9): Bad file descriptor 00:20:12.825 [2024-05-15 04:21:00.640637] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.825 [2024-05-15 04:21:00.640699] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.826 [2024-05-15 04:21:00.640962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.640986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.826 [2024-05-15 04:21:00.641251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 
[2024-05-15 04:21:00.641552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 
04:21:00.641847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.641972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.641986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 
04:21:00.642159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.826 [2024-05-15 04:21:00.642203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.826 [2024-05-15 04:21:00.642217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 
04:21:00.642468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.827 [2024-05-15 04:21:00.642904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.642997] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18a3d20 was disconnected and freed. reset controller. 
00:20:12.827 [2024-05-15 04:21:00.643180] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.827 [2024-05-15 04:21:00.643267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150d50 is same with the state(5) to be set 00:20:12.827 [2024-05-15 04:21:00.643431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1143210 (9): Bad file descriptor 00:20:12.827 [2024-05-15 04:21:00.643488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d300 (9): Bad file descriptor 00:20:12.827 [2024-05-15 04:21:00.643540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.827 [2024-05-15 04:21:00.643652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.827 [2024-05-15 04:21:00.643665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc51f0 is same with the state(5) to be set 
00:20:12.827 [2024-05-15 04:21:00.643687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa26b0 (9): Bad file descriptor 00:20:12.827 [2024-05-15 04:21:00.643716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148d10 (9): Bad file descriptor 00:20:12.828 [2024-05-15 04:21:00.643763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.828 [2024-05-15 04:21:00.643783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.828 [2024-05-15 04:21:00.643798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.828 [2024-05-15 04:21:00.643812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.828 [2024-05-15 04:21:00.643827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.828 [2024-05-15 04:21:00.643840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.828 [2024-05-15 04:21:00.643854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.828 [2024-05-15 04:21:00.643867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.828 [2024-05-15 04:21:00.643880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9730 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.644297] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.828 [2024-05-15 04:21:00.644372] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.828 [2024-05-15 04:21:00.645676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:12.828 [2024-05-15 04:21:00.645709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc51f0 (9): Bad file descriptor 00:20:12.828 [2024-05-15 04:21:00.645947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.828 [2024-05-15 04:21:00.646119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.828 [2024-05-15 04:21:00.646145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e7c0 with addr=10.0.0.2, port=4420 00:20:12.828 [2024-05-15 04:21:00.646162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e7c0 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.646278] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.828 [2024-05-15 04:21:00.646459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e7c0 (9): Bad file descriptor 00:20:12.828 [2024-05-15 04:21:00.647071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.828 [2024-05-15 04:21:00.647505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 
04:21:00.647529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.828 [2024-05-15 04:21:00.647535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc51f0 with addr=10.0.0.2, port=4420 00:20:12.828 [2024-05-15 04:21:00.647562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc51f0 is same w[2024-05-15 04:21:00.647575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with ith the state(5) to be set 00:20:12.828 the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.828 [2024-05-15 04:21:00.647601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:12.828 [2024-05-15 04:21:00.647613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:12.828 [2024-05-15 04:21:00.647625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:12.828 [2024-05-15 04:21:00.647805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc51f0 (9): Bad file descriptor 00:20:12.828 [2024-05-15 04:21:00.647828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.647997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:12.828 [2024-05-15 04:21:00.648033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:12.828 [2024-05-15 04:21:00.648045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:12.828 [2024-05-15 04:21:00.648057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 00:20:12.828 [2024-05-15 04:21:00.648207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
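The reset of cnode7 above appears to fail because the reconnect's connect() toward 10.0.0.2:4420 keeps returning errno 111, after which nvme_ctrlr_process_init reports the controller in error state and reconnect polling gives up. On Linux, errno 111 is ECONNREFUSED. The small sketch below only shows what that errno value maps to; the numeric comparison is Linux-specific and is not part of the test code.

```c
/* Sketch: map the "connect() failed, errno = 111" value from the log to its
 * symbolic name and message.  ECONNREFUSED is 111 on Linux. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int err = 111;  /* value reported by posix_sock_create in the log */
    printf("errno %d is %s: %s\n", err,
           err == ECONNREFUSED ? "ECONNREFUSED" : "not ECONNREFUSED here",
           strerror(err));  /* prints "Connection refused" on Linux */
    return 0;
}
```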
00:20:12.829 [2024-05-15 04:21:00.648220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f70 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.648985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.829 [2024-05-15 04:21:00.649860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.829 [2024-05-15 04:21:00.649866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.829 [2024-05-15 04:21:00.649881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.649889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.649897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.649910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.649923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.649951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.649953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.649967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.649976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.649982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.649998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.649999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa410 is same with the state(5) to be set 
00:20:12.830 [2024-05-15 04:21:00.650418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.830 [2024-05-15 04:21:00.650477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.830 [2024-05-15 04:21:00.650492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.830 [2024-05-15 04:21:00.650773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.830 [2024-05-15 04:21:00.650789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.650992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.651285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.651300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4a6a0 is same with the state(5) to be set 00:20:12.831 [2024-05-15 04:21:00.651366] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a4a6a0 was disconnected and freed. reset controller. 
00:20:12.831 [2024-05-15 04:21:00.652659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:12.831 [2024-05-15 04:21:00.652719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110efb0 (9): Bad file descriptor 00:20:12.831 [2024-05-15 04:21:00.653424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.831 [2024-05-15 04:21:00.653631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.831 [2024-05-15 04:21:00.653656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110efb0 with addr=10.0.0.2, port=4420 00:20:12.831 [2024-05-15 04:21:00.653672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110efb0 is same with the state(5) to be set 00:20:12.831 [2024-05-15 04:21:00.653761] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:12.831 [2024-05-15 04:21:00.653798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110efb0 (9): Bad file descriptor 00:20:12.831 [2024-05-15 04:21:00.653823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150d50 (9): Bad file descriptor 00:20:12.831 [2024-05-15 04:21:00.653881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.831 [2024-05-15 04:21:00.653903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.653923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.831 [2024-05-15 04:21:00.653948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.653969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.831 [2024-05-15 04:21:00.653983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.653998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.831 [2024-05-15 04:21:00.654011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d4f0 is same with the state(5) to be set 00:20:12.831 [2024-05-15 04:21:00.654071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9730 (9): Bad file descriptor 00:20:12.831 [2024-05-15 04:21:00.654196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:12.831 [2024-05-15 04:21:00.654218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:12.831 [2024-05-15 04:21:00.654241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:20:12.831 [2024-05-15 04:21:00.654280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 
04:21:00.654593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.831 [2024-05-15 04:21:00.654695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.831 [2024-05-15 04:21:00.654710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.654973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.654986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.832 [2024-05-15 04:21:00.655724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.832 [2024-05-15 04:21:00.655738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.655981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.655997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.656227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.656242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1064520 is same with the state(5) to be set 00:20:12.833 [2024-05-15 04:21:00.657518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.657980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.657994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.833 [2024-05-15 04:21:00.658270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.833 [2024-05-15 04:21:00.658284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.658968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.658983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.834 [2024-05-15 04:21:00.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 
04:21:00.659296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.834 [2024-05-15 04:21:00.659525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.834 [2024-05-15 04:21:00.659540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5fda0 is same with the state(5) to be set 00:20:12.834 [2024-05-15 04:21:00.660788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660848] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.660983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.660998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.661984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.661998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.662013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.662027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.662043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.835 [2024-05-15 04:21:00.662057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.835 [2024-05-15 04:21:00.662072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.662742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.662756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c150 is same with the state(5) to be set 00:20:12.836 [2024-05-15 04:21:00.664065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-05-15 04:21:00.664440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.836 [2024-05-15 04:21:00.664456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.664983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.664997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.837 [2024-05-15 04:21:00.665536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.837 [2024-05-15 04:21:00.665672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.837 [2024-05-15 04:21:00.665686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 
04:21:00.665841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.665996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.666012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.666025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.666040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe0c30 is same with the state(5) to be set 00:20:12.838 [2024-05-15 04:21:00.668387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:12.838 [2024-05-15 04:21:00.668420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.838 [2024-05-15 04:21:00.668438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:12.838 [2024-05-15 04:21:00.668461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:12.838 [2024-05-15 04:21:00.668479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:12.838 [2024-05-15 04:21:00.668604] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:12.838 [2024-05-15 04:21:00.668638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d4f0 (9): Bad file descriptor 00:20:12.838 [2024-05-15 04:21:00.668765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:12.838 [2024-05-15 04:21:00.669108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.669303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.669329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e7c0 with addr=10.0.0.2, port=4420 00:20:12.838 [2024-05-15 04:21:00.669346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e7c0 is same with the state(5) to be set 00:20:12.838 [2024-05-15 04:21:00.669519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.669690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.669716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1148d10 with addr=10.0.0.2, port=4420 00:20:12.838 [2024-05-15 04:21:00.669732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148d10 is same with the state(5) to be set 00:20:12.838 [2024-05-15 04:21:00.669900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.670118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.670142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa26b0 with addr=10.0.0.2, port=4420 00:20:12.838 [2024-05-15 04:21:00.670158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa26b0 is same with the state(5) to be set 00:20:12.838 [2024-05-15 04:21:00.670501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.670673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.838 [2024-05-15 04:21:00.670697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d300 with addr=10.0.0.2, port=4420 00:20:12.838 [2024-05-15 04:21:00.670715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8d300 is same with the state(5) to be set 00:20:12.838 [2024-05-15 04:21:00.671610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.671970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.671985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.838 [2024-05-15 04:21:00.672240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.838 [2024-05-15 04:21:00.672256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:12.839 [2024-05-15 04:21:00.672376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 
[2024-05-15 04:21:00.672689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.672972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.672993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 
04:21:00.673023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.839 [2024-05-15 04:21:00.673509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.839 [2024-05-15 04:21:00.673523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.673539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.673553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.673573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.673587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.673603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.673618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.673633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.673648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.673663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78470 is same with the state(5) to be set 00:20:12.840 [2024-05-15 04:21:00.674958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.674982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675246] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.840 [2024-05-15 04:21:00.675919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.840 [2024-05-15 04:21:00.675940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.675968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.675982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.675997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:12.841 [2024-05-15 04:21:00.676520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 
04:21:00.676839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.676976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.841 [2024-05-15 04:21:00.676990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.841 [2024-05-15 04:21:00.677005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf798e0 is same with the state(5) to be set 00:20:12.841 [2024-05-15 04:21:00.678529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:12.841 [2024-05-15 04:21:00.678567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:12.841 [2024-05-15 04:21:00.678588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:12.841 [2024-05-15 04:21:00.678606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:12.841 [2024-05-15 04:21:00.678943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.841 [2024-05-15 04:21:00.679134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.841 [2024-05-15 04:21:00.679162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1143210 with addr=10.0.0.2, port=4420 00:20:12.841 [2024-05-15 04:21:00.679180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143210 is same with the state(5) to be set 00:20:12.841 [2024-05-15 04:21:00.679207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e7c0 (9): Bad file descriptor 00:20:12.841 [2024-05-15 04:21:00.679234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148d10 (9): Bad file descriptor 00:20:12.841 [2024-05-15 04:21:00.679253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa26b0 (9): Bad file descriptor 00:20:12.841 [2024-05-15 
04:21:00.679271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d300 (9): Bad file descriptor 00:20:12.841 [2024-05-15 04:21:00.679584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.841 [2024-05-15 04:21:00.679752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.841 [2024-05-15 04:21:00.679778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc51f0 with addr=10.0.0.2, port=4420 00:20:12.841 [2024-05-15 04:21:00.679794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc51f0 is same with the state(5) to be set 00:20:12.841 [2024-05-15 04:21:00.679965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.841 [2024-05-15 04:21:00.680130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.842 [2024-05-15 04:21:00.680154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110efb0 with addr=10.0.0.2, port=4420 00:20:12.842 [2024-05-15 04:21:00.680170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110efb0 is same with the state(5) to be set 00:20:12.842 [2024-05-15 04:21:00.680332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.842 [2024-05-15 04:21:00.680509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.842 [2024-05-15 04:21:00.680535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9730 with addr=10.0.0.2, port=4420 00:20:12.842 [2024-05-15 04:21:00.680551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9730 is same with the state(5) to be set 00:20:12.842 [2024-05-15 04:21:00.680747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.842 [2024-05-15 04:21:00.680901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.842 [2024-05-15 04:21:00.680926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1150d50 with addr=10.0.0.2, port=4420 00:20:12.842 [2024-05-15 04:21:00.680949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150d50 is same with the state(5) to be set 00:20:12.842 [2024-05-15 04:21:00.680969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1143210 (9): Bad file descriptor 00:20:12.842 [2024-05-15 04:21:00.680987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.842 [2024-05-15 04:21:00.681002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:12.842 [2024-05-15 04:21:00.681018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:12.842 [2024-05-15 04:21:00.681044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:12.842 [2024-05-15 04:21:00.681060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:12.842 [2024-05-15 04:21:00.681073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:12.842 [2024-05-15 04:21:00.681091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:12.842 [2024-05-15 04:21:00.681106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:12.842 [2024-05-15 04:21:00.681119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:12.842 [2024-05-15 04:21:00.681137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:12.842 [2024-05-15 04:21:00.681152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:12.842 [2024-05-15 04:21:00.681165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:12.842 [2024-05-15 04:21:00.681188] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.842 [2024-05-15 04:21:00.681209] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.842 [2024-05-15 04:21:00.681229] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.842 [2024-05-15 04:21:00.681247] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.842 [2024-05-15 04:21:00.681265] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.842 [2024-05-15 04:21:00.681859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.681883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.681941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.681960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.681975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.681992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.842 [2024-05-15 04:21:00.682703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.842 [2024-05-15 04:21:00.682778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.842 [2024-05-15 04:21:00.682793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.682981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.682996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 
04:21:00.683026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.843 [2024-05-15 04:21:00.683890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.843 [2024-05-15 04:21:00.683905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdf720 is same with the state(5) to be set 00:20:12.843 [2024-05-15 04:21:00.685586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.843 [2024-05-15 04:21:00.685611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.843 [2024-05-15 04:21:00.685624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
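Each READ print above is paired with an "ABORTED - SQ DELETION (00/08)" completion: status code type 0x00 (generic) with status code 0x08, meaning the command was aborted because its submission queue was deleted during teardown, not because the read itself failed. The command prints also show the verify job's layout: queue depth 64 (cid 0..63) with 128-block reads, so consecutive cids cover consecutive LBA ranges. A small illustrative check (not from the test scripts), assuming 512-byte blocks, which matches the 65536-byte IO size reported below:
# Illustrative only: with 128 blocks of 512 B per read, cid N starts at lba 16384 + N*128,
# matching the READ entries above (16384, 16512, 16640, ...).
awk 'BEGIN { for (cid = 0; cid < 4; cid++) printf "cid:%d lba:%d len:128 (%d bytes)\n", cid, 16384 + cid * 128, 128 * 512 }'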
00:20:12.843 [2024-05-15 04:21:00.685636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:12.843 task offset: 24576 on job bdev=Nvme1n1 fails
00:20:12.843
00:20:12.843 Latency(us)
All jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400, and each ended with error after roughly the runtime shown.
Device Information : runtime(s)     IOPS   MiB/s   Fail/s   TO/s     Average        min         max
Nvme1n1            :       0.87   221.85   13.87    73.95   0.00   213788.92    4660.34   237677.23
Nvme2n1            :       0.88   144.95    9.06    72.48   0.00   284889.69   22233.69   273406.48
Nvme3n1            :       0.89   144.42    9.03    72.21   0.00   279906.99   42137.22   229910.00
Nvme4n1            :       0.89   143.89    8.99    71.95   0.00   274798.11   26214.40   246997.90
Nvme5n1            :       0.90   142.15    8.88    71.07   0.00   272394.37   22622.06   236123.78
Nvme6n1            :       0.90   141.63    8.85    70.82   0.00   267358.81   23690.05   270299.59
Nvme7n1            :       0.87   220.38   13.77    73.46   0.00   187754.10    4563.25   253211.69
Nvme8n1            :       0.88   145.74    9.11    72.87   0.00   246857.89    6189.51   299815.06
Nvme9n1            :       0.91   140.56    8.78    70.28   0.00   251751.79   23787.14   270299.59
Nvme10n1           :       0.89   143.37    8.96    71.68   0.00   239941.15   24175.50   257872.02
===================================================================================================================
Total              :            1588.94   99.31   720.76   0.00   248745.89    4563.25   299815.06
00:20:12.844 [2024-05-15 04:21:00.711156] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
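The IOPS and MiB/s columns above are consistent with the job's 65536-byte IO size: each completed I/O moves 64 KiB, so MiB/s is IOPS / 16. A quick sanity check of the first row, illustrative only and not part of the test run:
# Illustrative only: recompute Nvme1n1's MiB/s column from its IOPS.
awk 'BEGIN { iops = 221.85; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# -> 13.87 MiB/s, matching the table.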
00:20:12.844 [2024-05-15 04:21:00.711249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:12.844 [2024-05-15 04:21:00.711323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc51f0 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.711353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110efb0 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.711373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9730 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.711392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150d50 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.711409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.711424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.711441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:12.844 [2024-05-15 04:21:00.711605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.844 [2024-05-15 04:21:00.711966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.712155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.712182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d4f0 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.712203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d4f0 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.712239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.712253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.712267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:12.844 [2024-05-15 04:21:00.712292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.712308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.712321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:12.844 [2024-05-15 04:21:00.712338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.712352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.712365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:12.844 [2024-05-15 04:21:00.712382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.712396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.712409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:12.844 [2024-05-15 04:21:00.712434] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.844 [2024-05-15 04:21:00.712454] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.844 [2024-05-15 04:21:00.712472] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.844 [2024-05-15 04:21:00.712490] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.844 [2024-05-15 04:21:00.712886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.844 [2024-05-15 04:21:00.712911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.844 [2024-05-15 04:21:00.712940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.844 [2024-05-15 04:21:00.712954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.844 [2024-05-15 04:21:00.712982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d4f0 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.713049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:12.844 [2024-05-15 04:21:00.713074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:12.844 [2024-05-15 04:21:00.713090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:12.844 [2024-05-15 04:21:00.713125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.713142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.713155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:12.844 [2024-05-15 04:21:00.713195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:12.844 [2024-05-15 04:21:00.713223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:12.844 [2024-05-15 04:21:00.713253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:12.844 [2024-05-15 04:21:00.713450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.713636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.713662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d300 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.713678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8d300 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.713852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa26b0 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.714108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa26b0 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1148d10 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.714488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148d10 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.714682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.714877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e7c0 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.714893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e7c0 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.715267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.715430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.844 [2024-05-15 04:21:00.715455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1143210 with addr=10.0.0.2, port=4420 00:20:12.844 [2024-05-15 04:21:00.715471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143210 is same with the state(5) to be set 00:20:12.844 [2024-05-15 04:21:00.715489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d300 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.715509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa26b0 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.715527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148d10 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.715571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e7c0 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.715594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1143210 (9): Bad file descriptor 00:20:12.844 [2024-05-15 04:21:00.715611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.715624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.715637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:12.844 [2024-05-15 04:21:00.715654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:12.844 [2024-05-15 04:21:00.715668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:12.844 [2024-05-15 04:21:00.715680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:12.844 [2024-05-15 04:21:00.715695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:12.845 [2024-05-15 04:21:00.715714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:12.845 [2024-05-15 04:21:00.715728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:12.845 [2024-05-15 04:21:00.715767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.845 [2024-05-15 04:21:00.715784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.845 [2024-05-15 04:21:00.715796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.845 [2024-05-15 04:21:00.715809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.845 [2024-05-15 04:21:00.715820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:12.845 [2024-05-15 04:21:00.715833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:12.845 [2024-05-15 04:21:00.715849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:12.845 [2024-05-15 04:21:00.715863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:12.845 [2024-05-15 04:21:00.715875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:12.845 [2024-05-15 04:21:00.715927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.845 [2024-05-15 04:21:00.715955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
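The dump above ends with controllers cnode1 through cnode10 all left in a failed state, which is expected when the target is killed mid-I/O. Purely as an illustration, and not something shutdown.sh runs, re-attaching one such controller by hand once a target is listening again would look roughly like this with SPDK's rpc.py, reusing the address, port, and subsystem NQN seen in the log:
# Illustrative only -- not executed by this test.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1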
00:20:13.430 04:21:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:13.430 04:21:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3424996 00:20:14.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3424996) - No such process 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.366 rmmod nvme_tcp 00:20:14.366 rmmod nvme_fabrics 00:20:14.366 rmmod nvme_keyring 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.366 04:21:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.914 04:21:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.914 00:20:16.914 real 0m8.195s 00:20:16.914 user 0m20.966s 00:20:16.914 sys 0m1.582s 00:20:16.914 
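The trace above is the stoptarget/nvmftestfini path for the tcp transport: remove the bdevperf state and config files, unload the NVMe fabrics kernel modules (retrying while they are busy), drop the test network namespace, and flush the test IP from the second port. Condensed into a standalone sketch, with an assumed function name rather than the real code in target/shutdown.sh and nvmf/common.sh:
# Rough, illustrative condensation of the teardown traced above -- not the actual script code.
nvmf_tcp_teardown() {
    rm -f ./local-job0-0-verify.state
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    sync
    modprobe -v -r nvme-tcp || true        # the real script retries this up to 20 times
    modprobe -v -r nvme-fabrics || true
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1               # clear the test address from the second port
}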
04:21:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:16.914 04:21:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.914 ************************************ 00:20:16.914 END TEST nvmf_shutdown_tc3 00:20:16.914 ************************************ 00:20:16.914 04:21:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:16.914 00:20:16.914 real 0m29.062s 00:20:16.914 user 1m19.986s 00:20:16.914 sys 0m6.959s 00:20:16.914 04:21:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:16.914 04:21:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:16.914 ************************************ 00:20:16.914 END TEST nvmf_shutdown 00:20:16.914 ************************************ 00:20:16.914 04:21:04 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:20:16.914 04:21:04 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.914 04:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.914 04:21:04 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:20:16.914 04:21:04 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.914 04:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.915 04:21:04 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:20:16.915 04:21:04 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:16.915 04:21:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:16.915 04:21:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:16.915 04:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.915 ************************************ 00:20:16.915 START TEST nvmf_multicontroller 00:20:16.915 ************************************ 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:16.915 * Looking for test storage... 
00:20:16.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:16.915 04:21:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.915 04:21:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.471 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.471 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.472 04:21:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:19.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:19.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:19.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:19.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.472 04:21:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.472 04:21:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:19.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:20:19.472 00:20:19.472 --- 10.0.0.2 ping statistics --- 00:20:19.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.472 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:20:19.472 00:20:19.472 --- 10.0.0.1 ping statistics --- 00:20:19.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.472 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3427927 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3427927 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3427927 ']' 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.472 04:21:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.473 [2024-05-15 04:21:07.079999] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:19.473 [2024-05-15 04:21:07.080094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.473 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.473 [2024-05-15 04:21:07.163409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.473 [2024-05-15 04:21:07.282111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.473 [2024-05-15 04:21:07.282174] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.473 [2024-05-15 04:21:07.282192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.473 [2024-05-15 04:21:07.282206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.473 [2024-05-15 04:21:07.282230] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.473 [2024-05-15 04:21:07.282334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.473 [2024-05-15 04:21:07.282442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.473 [2024-05-15 04:21:07.282445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.473 [2024-05-15 04:21:07.424154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.473 04:21:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.473 Malloc0 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.473 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 [2024-05-15 04:21:07.496333] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:19.731 [2024-05-15 04:21:07.496618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 [2024-05-15 04:21:07.504448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 Malloc1 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3428220 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3428220 /var/tmp/bdevperf.sock 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3428220 ']' 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
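The rpc_cmd calls traced above finish the target-side setup for the multicontroller test: a TCP transport (-o -u 8192), subsystems cnode1 and cnode2 backed by 64 MiB / 512-byte-block malloc bdevs, and listeners on 10.0.0.2 ports 4420 and 4421, after which bdevperf is started in RPC mode (-z) on /var/tmp/bdevperf.sock. As a minimal sketch only, and assuming SPDK's scripts/rpc.py client talking to the default /var/tmp/spdk.sock, the same configuration looks like this:

    rpc=./scripts/rpc.py   # assumed path inside an SPDK checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is set up the same way with Malloc1 and serial SPDK00000000000002.
    # bdevperf is then launched separately, exactly as traced above:
    #   build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f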
00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.731 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.990 NVMe0n1 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.990 1 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.990 04:21:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.248 request: 00:20:20.248 { 00:20:20.248 "name": "NVMe0", 00:20:20.248 "trtype": "tcp", 00:20:20.248 "traddr": "10.0.0.2", 00:20:20.248 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:20.248 "hostaddr": "10.0.0.2", 00:20:20.248 "hostsvcid": "60000", 00:20:20.248 "adrfam": "ipv4", 00:20:20.248 "trsvcid": "4420", 00:20:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.249 "method": 
"bdev_nvme_attach_controller", 00:20:20.249 "req_id": 1 00:20:20.249 } 00:20:20.249 Got JSON-RPC error response 00:20:20.249 response: 00:20:20.249 { 00:20:20.249 "code": -114, 00:20:20.249 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.249 } 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 request: 00:20:20.249 { 00:20:20.249 "name": "NVMe0", 00:20:20.249 "trtype": "tcp", 00:20:20.249 "traddr": "10.0.0.2", 00:20:20.249 "hostaddr": "10.0.0.2", 00:20:20.249 "hostsvcid": "60000", 00:20:20.249 "adrfam": "ipv4", 00:20:20.249 "trsvcid": "4420", 00:20:20.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.249 "method": "bdev_nvme_attach_controller", 00:20:20.249 "req_id": 1 00:20:20.249 } 00:20:20.249 Got JSON-RPC error response 00:20:20.249 response: 00:20:20.249 { 00:20:20.249 "code": -114, 00:20:20.249 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.249 } 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 request: 00:20:20.249 { 00:20:20.249 "name": "NVMe0", 00:20:20.249 "trtype": "tcp", 00:20:20.249 "traddr": "10.0.0.2", 00:20:20.249 "hostaddr": "10.0.0.2", 00:20:20.249 "hostsvcid": "60000", 00:20:20.249 "adrfam": "ipv4", 00:20:20.249 "trsvcid": "4420", 00:20:20.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.249 "multipath": "disable", 00:20:20.249 "method": "bdev_nvme_attach_controller", 00:20:20.249 "req_id": 1 00:20:20.249 } 00:20:20.249 Got JSON-RPC error response 00:20:20.249 response: 00:20:20.249 { 00:20:20.249 "code": -114, 00:20:20.249 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:20.249 } 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 request: 00:20:20.249 { 00:20:20.249 "name": "NVMe0", 00:20:20.249 "trtype": "tcp", 00:20:20.249 "traddr": "10.0.0.2", 00:20:20.249 "hostaddr": "10.0.0.2", 00:20:20.249 "hostsvcid": "60000", 00:20:20.249 "adrfam": "ipv4", 00:20:20.249 "trsvcid": "4420", 00:20:20.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.249 "multipath": "failover", 00:20:20.249 "method": "bdev_nvme_attach_controller", 00:20:20.249 "req_id": 1 00:20:20.249 } 00:20:20.249 Got JSON-RPC error response 00:20:20.249 response: 00:20:20.249 { 00:20:20.249 "code": -114, 00:20:20.249 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.249 } 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.249 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:20.250 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 04:21:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:20.250 04:21:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.622 0 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3428220 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3428220 ']' 00:20:21.622 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3428220 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3428220 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3428220' 00:20:21.623 killing process with pid 3428220 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3428220 00:20:21.623 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3428220 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:21.882 04:21:09 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:20:21.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:21.882 [2024-05-15 04:21:07.602326] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:21.882 [2024-05-15 04:21:07.602417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428220 ] 00:20:21.882 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.882 [2024-05-15 04:21:07.677483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.882 [2024-05-15 04:21:07.789673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.882 [2024-05-15 04:21:08.194884] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name c080d492-5225-4686-addc-06363c5f65fc already exists 00:20:21.882 [2024-05-15 04:21:08.194924] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:c080d492-5225-4686-addc-06363c5f65fc alias for bdev NVMe1n1 00:20:21.882 [2024-05-15 04:21:08.194969] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:21.882 Running I/O for 1 seconds... 
00:20:21.882 00:20:21.882 Latency(us) 00:20:21.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.882 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:21.882 NVMe0n1 : 1.01 16247.90 63.47 0.00 0.00 7856.01 5218.61 17670.45 00:20:21.882 =================================================================================================================== 00:20:21.882 Total : 16247.90 63.47 0.00 0.00 7856.01 5218.61 17670.45 00:20:21.882 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.882 00:20:21.882 Latency(us) 00:20:21.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.882 =================================================================================================================== 00:20:21.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.882 rmmod nvme_tcp 00:20:21.882 rmmod nvme_fabrics 00:20:21.882 rmmod nvme_keyring 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3427927 ']' 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3427927 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3427927 ']' 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3427927 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3427927 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3427927' 00:20:21.882 killing process with pid 3427927 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3427927 00:20:21.882 [2024-05-15 
04:21:09.730352] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:21.882 04:21:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3427927 00:20:22.140 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.140 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.140 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.141 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.141 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.141 04:21:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.141 04:21:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.141 04:21:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.670 04:21:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.670 00:20:24.670 real 0m7.664s 00:20:24.670 user 0m11.054s 00:20:24.670 sys 0m2.591s 00:20:24.670 04:21:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:24.670 04:21:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:24.670 ************************************ 00:20:24.670 END TEST nvmf_multicontroller 00:20:24.670 ************************************ 00:20:24.670 04:21:12 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:24.670 04:21:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:24.670 04:21:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:24.670 04:21:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.670 ************************************ 00:20:24.670 START TEST nvmf_aer 00:20:24.670 ************************************ 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:24.670 * Looking for test storage... 
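For reference, the multipath behaviour that the multicontroller test above exercises against the bdevperf RPC socket reduces to the attach sequence below; this is a sketch reconstructed from the traced rpc_cmd calls (same arguments, expressed with the scripts/rpc.py client), not an extra test step:

    rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'   # client path assumed

    # First path for controller NVMe0 (creates bdev NVMe0n1):
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Re-attaching under the same name with a different hostnqn, a different subnqn,
    # or with '-x disable' / '-x failover' on the already-claimed path is rejected
    # with JSON-RPC error -114, as the responses above show.

    # A second listener of the same subsystem can be added and removed as an extra
    # path, and an independent controller can then be attached on it:
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $rpc bdev_nvme_get_controllers | grep -c NVMe   # the test expects 2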
00:20:24.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:24.670 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.671 04:21:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:27.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:27.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:27.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:27.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.216 
04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:27.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:20:27.216 00:20:27.216 --- 10.0.0.2 ping statistics --- 00:20:27.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.216 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:20:27.216 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:20:27.216 00:20:27.216 --- 10.0.0.1 ping statistics --- 00:20:27.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.216 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3431079 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3431079 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3431079 ']' 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.217 04:21:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.217 [2024-05-15 04:21:14.905017] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:27.217 [2024-05-15 04:21:14.905096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.217 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.217 [2024-05-15 04:21:14.986607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.217 [2024-05-15 04:21:15.110004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.217 [2024-05-15 04:21:15.110056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:27.217 [2024-05-15 04:21:15.110071] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.217 [2024-05-15 04:21:15.110082] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.217 [2024-05-15 04:21:15.110092] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.217 [2024-05-15 04:21:15.110154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.217 [2024-05-15 04:21:15.110221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.217 [2024-05-15 04:21:15.110279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.217 [2024-05-15 04:21:15.110281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.149 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.149 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:20:28.149 04:21:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.149 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.149 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 [2024-05-15 04:21:15.878853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 Malloc0 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 [2024-05-15 04:21:15.931670] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:28.150 [2024-05-15 04:21:15.932024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.150 [ 00:20:28.150 { 00:20:28.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.150 "subtype": "Discovery", 00:20:28.150 "listen_addresses": [], 00:20:28.150 "allow_any_host": true, 00:20:28.150 "hosts": [] 00:20:28.150 }, 00:20:28.150 { 00:20:28.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.150 "subtype": "NVMe", 00:20:28.150 "listen_addresses": [ 00:20:28.150 { 00:20:28.150 "trtype": "TCP", 00:20:28.150 "adrfam": "IPv4", 00:20:28.150 "traddr": "10.0.0.2", 00:20:28.150 "trsvcid": "4420" 00:20:28.150 } 00:20:28.150 ], 00:20:28.150 "allow_any_host": true, 00:20:28.150 "hosts": [], 00:20:28.150 "serial_number": "SPDK00000000000001", 00:20:28.150 "model_number": "SPDK bdev Controller", 00:20:28.150 "max_namespaces": 2, 00:20:28.150 "min_cntlid": 1, 00:20:28.150 "max_cntlid": 65519, 00:20:28.150 "namespaces": [ 00:20:28.150 { 00:20:28.150 "nsid": 1, 00:20:28.150 "bdev_name": "Malloc0", 00:20:28.150 "name": "Malloc0", 00:20:28.150 "nguid": "38882ECDA8D348D9A42473597FEC88DE", 00:20:28.150 "uuid": "38882ecd-a8d3-48d9-a424-73597fec88de" 00:20:28.150 } 00:20:28.150 ] 00:20:28.150 } 00:20:28.150 ] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3431237 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:20:28.150 04:21:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:28.150 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:20:28.150 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 Malloc1 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 [ 00:20:28.408 { 00:20:28.408 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.408 "subtype": "Discovery", 00:20:28.408 "listen_addresses": [], 00:20:28.408 "allow_any_host": true, 00:20:28.408 "hosts": [] 00:20:28.408 }, 00:20:28.408 { 00:20:28.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.408 "subtype": "NVMe", 00:20:28.408 "listen_addresses": [ 00:20:28.408 { 00:20:28.408 "trtype": "TCP", 00:20:28.408 "adrfam": "IPv4", 00:20:28.408 "traddr": "10.0.0.2", 00:20:28.408 "trsvcid": "4420" 00:20:28.408 } 00:20:28.408 ], 00:20:28.408 "allow_any_host": true, 00:20:28.408 "hosts": [], 00:20:28.408 "serial_number": "SPDK00000000000001", 00:20:28.408 "model_number": "SPDK bdev Controller", 00:20:28.408 "max_namespaces": 2, 00:20:28.408 "min_cntlid": 1, 00:20:28.408 "max_cntlid": 65519, 00:20:28.408 "namespaces": [ 00:20:28.408 { 00:20:28.408 "nsid": 1, 00:20:28.408 "bdev_name": "Malloc0", 00:20:28.408 "name": "Malloc0", 00:20:28.408 "nguid": "38882ECDA8D348D9A42473597FEC88DE", 00:20:28.408 "uuid": "38882ecd-a8d3-48d9-a424-73597fec88de" 00:20:28.408 }, 00:20:28.408 { 00:20:28.408 "nsid": 2, 00:20:28.408 "bdev_name": "Malloc1", 00:20:28.408 "name": "Malloc1", 00:20:28.408 "nguid": "A20D02926D90478B852E04C924A4DBAE", 00:20:28.408 "uuid": "a20d0292-6d90-478b-852e-04c924a4dbae" 00:20:28.408 } 00:20:28.408 ] 00:20:28.408 } 00:20:28.408 ] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3431237 00:20:28.408 Asynchronous Event Request test 00:20:28.408 Attaching to 10.0.0.2 00:20:28.408 Attached to 10.0.0.2 00:20:28.408 Registering asynchronous event callbacks... 00:20:28.408 Starting namespace attribute notice tests for all controllers... 
00:20:28.408 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:28.408 aer_cb - Changed Namespace 00:20:28.408 Cleaning up... 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.408 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.666 rmmod nvme_tcp 00:20:28.666 rmmod nvme_fabrics 00:20:28.666 rmmod nvme_keyring 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3431079 ']' 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3431079 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3431079 ']' 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3431079 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3431079 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3431079' 00:20:28.666 killing process with pid 3431079 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3431079 00:20:28.666 [2024-05-15 04:21:16.507662] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:28.666 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3431079 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.924 04:21:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.826 04:21:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.826 00:20:30.826 real 0m6.698s 00:20:30.826 user 0m7.537s 00:20:30.826 sys 0m2.379s 00:20:30.826 04:21:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:30.826 04:21:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:30.826 ************************************ 00:20:30.826 END TEST nvmf_aer 00:20:30.826 ************************************ 00:20:31.084 04:21:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:31.084 04:21:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:31.084 04:21:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:31.084 04:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.084 ************************************ 00:20:31.084 START TEST nvmf_async_init 00:20:31.084 ************************************ 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:31.084 * Looking for test storage... 
00:20:31.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.084 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cfc1bc53ce7c45ffae9243b59e7b20f2 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.085 04:21:18 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.085 04:21:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:33.615 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:33.615 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:33.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:33.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:33.615 00:20:33.615 --- 10.0.0.2 ping statistics --- 00:20:33.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.615 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:20:33.615 00:20:33.615 --- 10.0.0.1 ping statistics --- 00:20:33.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.615 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3433580 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3433580 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3433580 ']' 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:33.615 04:21:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.615 [2024-05-15 04:21:21.566584] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:20:33.615 [2024-05-15 04:21:21.566674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.615 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.874 [2024-05-15 04:21:21.644557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.874 [2024-05-15 04:21:21.753632] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.874 [2024-05-15 04:21:21.753696] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.874 [2024-05-15 04:21:21.753726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.875 [2024-05-15 04:21:21.753737] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.875 [2024-05-15 04:21:21.753747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.875 [2024-05-15 04:21:21.753781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 [2024-05-15 04:21:22.544928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 null0 00:20:34.837 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cfc1bc53ce7c45ffae9243b59e7b20f2 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 [2024-05-15 04:21:22.584956] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:34.838 [2024-05-15 04:21:22.585240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 nvme0n1 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 [ 00:20:34.838 { 00:20:34.838 "name": "nvme0n1", 00:20:34.838 "aliases": [ 00:20:34.838 "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2" 00:20:34.838 ], 00:20:34.838 "product_name": "NVMe disk", 00:20:34.838 "block_size": 512, 00:20:34.838 "num_blocks": 2097152, 00:20:34.838 "uuid": "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2", 00:20:34.838 "assigned_rate_limits": { 00:20:34.838 "rw_ios_per_sec": 0, 00:20:34.838 "rw_mbytes_per_sec": 0, 00:20:34.838 "r_mbytes_per_sec": 0, 00:20:34.838 "w_mbytes_per_sec": 0 00:20:34.838 }, 00:20:34.838 "claimed": false, 00:20:34.838 "zoned": false, 00:20:34.838 "supported_io_types": { 00:20:34.838 "read": true, 00:20:34.838 "write": true, 00:20:34.838 "unmap": false, 00:20:34.838 "write_zeroes": true, 00:20:34.838 "flush": true, 00:20:34.838 "reset": true, 00:20:34.838 "compare": true, 00:20:34.838 "compare_and_write": true, 00:20:34.838 "abort": true, 00:20:34.838 "nvme_admin": true, 00:20:34.838 "nvme_io": true 00:20:34.838 }, 00:20:34.838 "memory_domains": [ 00:20:34.838 { 00:20:34.838 "dma_device_id": "system", 00:20:34.838 "dma_device_type": 1 00:20:34.838 } 00:20:34.838 ], 00:20:34.838 "driver_specific": { 00:20:34.838 "nvme": [ 00:20:34.838 { 00:20:34.838 "trid": { 00:20:34.838 "trtype": "TCP", 00:20:34.838 "adrfam": "IPv4", 00:20:34.838 "traddr": "10.0.0.2", 00:20:34.838 "trsvcid": "4420", 00:20:34.838 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:34.838 }, 
00:20:34.838 "ctrlr_data": { 00:20:34.838 "cntlid": 1, 00:20:34.838 "vendor_id": "0x8086", 00:20:34.838 "model_number": "SPDK bdev Controller", 00:20:34.838 "serial_number": "00000000000000000000", 00:20:34.838 "firmware_revision": "24.05", 00:20:34.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.838 "oacs": { 00:20:34.838 "security": 0, 00:20:34.838 "format": 0, 00:20:34.838 "firmware": 0, 00:20:34.838 "ns_manage": 0 00:20:34.838 }, 00:20:34.838 "multi_ctrlr": true, 00:20:34.838 "ana_reporting": false 00:20:34.838 }, 00:20:34.838 "vs": { 00:20:34.838 "nvme_version": "1.3" 00:20:34.838 }, 00:20:34.838 "ns_data": { 00:20:34.838 "id": 1, 00:20:34.838 "can_share": true 00:20:34.838 } 00:20:34.838 } 00:20:34.838 ], 00:20:34.838 "mp_policy": "active_passive" 00:20:34.838 } 00:20:34.838 } 00:20:34.838 ] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.838 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.838 [2024-05-15 04:21:22.833733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:34.838 [2024-05-15 04:21:22.833826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d92be0 (9): Bad file descriptor 00:20:35.096 [2024-05-15 04:21:22.966063] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.096 [ 00:20:35.096 { 00:20:35.096 "name": "nvme0n1", 00:20:35.096 "aliases": [ 00:20:35.096 "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2" 00:20:35.096 ], 00:20:35.096 "product_name": "NVMe disk", 00:20:35.096 "block_size": 512, 00:20:35.096 "num_blocks": 2097152, 00:20:35.096 "uuid": "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2", 00:20:35.096 "assigned_rate_limits": { 00:20:35.096 "rw_ios_per_sec": 0, 00:20:35.096 "rw_mbytes_per_sec": 0, 00:20:35.096 "r_mbytes_per_sec": 0, 00:20:35.096 "w_mbytes_per_sec": 0 00:20:35.096 }, 00:20:35.096 "claimed": false, 00:20:35.096 "zoned": false, 00:20:35.096 "supported_io_types": { 00:20:35.096 "read": true, 00:20:35.096 "write": true, 00:20:35.096 "unmap": false, 00:20:35.096 "write_zeroes": true, 00:20:35.096 "flush": true, 00:20:35.096 "reset": true, 00:20:35.096 "compare": true, 00:20:35.096 "compare_and_write": true, 00:20:35.096 "abort": true, 00:20:35.096 "nvme_admin": true, 00:20:35.096 "nvme_io": true 00:20:35.096 }, 00:20:35.096 "memory_domains": [ 00:20:35.096 { 00:20:35.096 "dma_device_id": "system", 00:20:35.096 "dma_device_type": 1 00:20:35.096 } 00:20:35.096 ], 00:20:35.096 "driver_specific": { 00:20:35.096 "nvme": [ 00:20:35.096 { 00:20:35.096 "trid": { 00:20:35.096 "trtype": "TCP", 00:20:35.096 "adrfam": "IPv4", 00:20:35.096 "traddr": "10.0.0.2", 00:20:35.096 "trsvcid": "4420", 00:20:35.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:35.096 }, 00:20:35.096 "ctrlr_data": { 00:20:35.096 "cntlid": 2, 00:20:35.096 
"vendor_id": "0x8086", 00:20:35.096 "model_number": "SPDK bdev Controller", 00:20:35.096 "serial_number": "00000000000000000000", 00:20:35.096 "firmware_revision": "24.05", 00:20:35.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.096 "oacs": { 00:20:35.096 "security": 0, 00:20:35.096 "format": 0, 00:20:35.096 "firmware": 0, 00:20:35.096 "ns_manage": 0 00:20:35.096 }, 00:20:35.096 "multi_ctrlr": true, 00:20:35.096 "ana_reporting": false 00:20:35.096 }, 00:20:35.096 "vs": { 00:20:35.096 "nvme_version": "1.3" 00:20:35.096 }, 00:20:35.096 "ns_data": { 00:20:35.096 "id": 1, 00:20:35.096 "can_share": true 00:20:35.096 } 00:20:35.096 } 00:20:35.096 ], 00:20:35.096 "mp_policy": "active_passive" 00:20:35.096 } 00:20:35.096 } 00:20:35.096 ] 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0uJM72iecO 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:35.096 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0uJM72iecO 00:20:35.097 04:21:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 [2024-05-15 04:21:23.014369] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.097 [2024-05-15 04:21:23.014555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0uJM72iecO 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 [2024-05-15 04:21:23.022371] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.097 04:21:23 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0uJM72iecO 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 [2024-05-15 04:21:23.030382] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.097 [2024-05-15 04:21:23.030464] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.097 nvme0n1 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.097 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.097 [ 00:20:35.097 { 00:20:35.097 "name": "nvme0n1", 00:20:35.354 "aliases": [ 00:20:35.354 "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2" 00:20:35.354 ], 00:20:35.354 "product_name": "NVMe disk", 00:20:35.354 "block_size": 512, 00:20:35.354 "num_blocks": 2097152, 00:20:35.354 "uuid": "cfc1bc53-ce7c-45ff-ae92-43b59e7b20f2", 00:20:35.354 "assigned_rate_limits": { 00:20:35.354 "rw_ios_per_sec": 0, 00:20:35.354 "rw_mbytes_per_sec": 0, 00:20:35.354 "r_mbytes_per_sec": 0, 00:20:35.354 "w_mbytes_per_sec": 0 00:20:35.354 }, 00:20:35.354 "claimed": false, 00:20:35.354 "zoned": false, 00:20:35.354 "supported_io_types": { 00:20:35.354 "read": true, 00:20:35.354 "write": true, 00:20:35.354 "unmap": false, 00:20:35.354 "write_zeroes": true, 00:20:35.354 "flush": true, 00:20:35.354 "reset": true, 00:20:35.354 "compare": true, 00:20:35.354 "compare_and_write": true, 00:20:35.354 "abort": true, 00:20:35.354 "nvme_admin": true, 00:20:35.354 "nvme_io": true 00:20:35.354 }, 00:20:35.354 "memory_domains": [ 00:20:35.354 { 00:20:35.354 "dma_device_id": "system", 00:20:35.354 "dma_device_type": 1 00:20:35.354 } 00:20:35.354 ], 00:20:35.354 "driver_specific": { 00:20:35.354 "nvme": [ 00:20:35.354 { 00:20:35.354 "trid": { 00:20:35.354 "trtype": "TCP", 00:20:35.354 "adrfam": "IPv4", 00:20:35.354 "traddr": "10.0.0.2", 00:20:35.354 "trsvcid": "4421", 00:20:35.354 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:35.354 }, 00:20:35.354 "ctrlr_data": { 00:20:35.354 "cntlid": 3, 00:20:35.354 "vendor_id": "0x8086", 00:20:35.354 "model_number": "SPDK bdev Controller", 00:20:35.354 "serial_number": "00000000000000000000", 00:20:35.354 "firmware_revision": "24.05", 00:20:35.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.354 "oacs": { 00:20:35.354 "security": 0, 00:20:35.354 "format": 0, 00:20:35.354 "firmware": 0, 00:20:35.354 "ns_manage": 0 00:20:35.354 }, 00:20:35.354 "multi_ctrlr": true, 00:20:35.354 "ana_reporting": false 00:20:35.354 }, 00:20:35.354 "vs": { 00:20:35.354 "nvme_version": "1.3" 00:20:35.354 }, 00:20:35.354 "ns_data": { 00:20:35.354 "id": 1, 00:20:35.354 "can_share": true 00:20:35.354 } 00:20:35.354 } 00:20:35.354 ], 00:20:35.354 "mp_policy": "active_passive" 00:20:35.354 } 00:20:35.354 } 00:20:35.354 ] 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0uJM72iecO 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.355 rmmod nvme_tcp 00:20:35.355 rmmod nvme_fabrics 00:20:35.355 rmmod nvme_keyring 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3433580 ']' 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3433580 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3433580 ']' 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3433580 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3433580 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3433580' 00:20:35.355 killing process with pid 3433580 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3433580 00:20:35.355 [2024-05-15 04:21:23.212074] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.355 [2024-05-15 04:21:23.212106] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:35.355 [2024-05-15 04:21:23.212135] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.355 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3433580 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.611 04:21:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.611 04:21:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.509 04:21:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.509 00:20:37.509 real 0m6.628s 00:20:37.509 user 0m3.121s 00:20:37.509 sys 0m2.131s 00:20:37.509 04:21:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:37.509 04:21:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:37.509 ************************************ 00:20:37.509 END TEST nvmf_async_init 00:20:37.509 ************************************ 00:20:37.766 04:21:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.766 04:21:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:37.766 04:21:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:37.766 04:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 ************************************ 00:20:37.766 START TEST dma 00:20:37.766 ************************************ 00:20:37.766 04:21:25 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.766 * Looking for test storage... 
00:20:37.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.766 04:21:25 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.766 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.766 04:21:25 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.766 04:21:25 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.767 04:21:25 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.767 04:21:25 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:37.767 04:21:25 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.767 04:21:25 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.767 04:21:25 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:37.767 04:21:25 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:37.767 00:20:37.767 real 0m0.069s 00:20:37.767 user 0m0.033s 00:20:37.767 sys 0m0.041s 00:20:37.767 04:21:25 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:37.767 04:21:25 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:37.767 ************************************ 00:20:37.767 END TEST dma 00:20:37.767 ************************************ 00:20:37.767 04:21:25 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:37.767 04:21:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:37.767 04:21:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:37.767 04:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.767 ************************************ 00:20:37.767 START TEST nvmf_identify 00:20:37.767 ************************************ 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:37.767 * Looking for test storage... 
00:20:37.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.767 04:21:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:40.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.292 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:40.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:40.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:40.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.293 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:20:40.551 00:20:40.551 --- 10.0.0.2 ping statistics --- 00:20:40.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.551 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:20:40.551 00:20:40.551 --- 10.0.0.1 ping statistics --- 00:20:40.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.551 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3436129 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3436129 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3436129 ']' 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.551 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.552 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.552 04:21:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.552 [2024-05-15 04:21:28.392131] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:40.552 [2024-05-15 04:21:28.392215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.552 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.552 [2024-05-15 04:21:28.466814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.809 [2024-05-15 04:21:28.575993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:40.809 [2024-05-15 04:21:28.576037] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.809 [2024-05-15 04:21:28.576066] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.809 [2024-05-15 04:21:28.576077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.810 [2024-05-15 04:21:28.576088] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.810 [2024-05-15 04:21:28.576139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.810 [2024-05-15 04:21:28.576197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.810 [2024-05-15 04:21:28.576259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.810 [2024-05-15 04:21:28.576262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.374 [2024-05-15 04:21:29.368968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.374 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 Malloc0 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 [2024-05-15 04:21:29.448301] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:41.633 [2024-05-15 04:21:29.448613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.633 [ 00:20:41.633 { 00:20:41.633 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:41.633 "subtype": "Discovery", 00:20:41.633 "listen_addresses": [ 00:20:41.633 { 00:20:41.633 "trtype": "TCP", 00:20:41.633 "adrfam": "IPv4", 00:20:41.633 "traddr": "10.0.0.2", 00:20:41.633 "trsvcid": "4420" 00:20:41.633 } 00:20:41.633 ], 00:20:41.633 "allow_any_host": true, 00:20:41.633 "hosts": [] 00:20:41.633 }, 00:20:41.633 { 00:20:41.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.633 "subtype": "NVMe", 00:20:41.633 "listen_addresses": [ 00:20:41.633 { 00:20:41.633 "trtype": "TCP", 00:20:41.633 "adrfam": "IPv4", 00:20:41.633 "traddr": "10.0.0.2", 00:20:41.633 "trsvcid": "4420" 00:20:41.633 } 00:20:41.633 ], 00:20:41.633 "allow_any_host": true, 00:20:41.633 "hosts": [], 00:20:41.633 "serial_number": "SPDK00000000000001", 00:20:41.633 "model_number": "SPDK bdev Controller", 00:20:41.633 "max_namespaces": 32, 00:20:41.633 "min_cntlid": 1, 00:20:41.633 "max_cntlid": 65519, 00:20:41.633 "namespaces": [ 00:20:41.633 { 00:20:41.633 "nsid": 1, 00:20:41.633 "bdev_name": "Malloc0", 00:20:41.633 "name": "Malloc0", 00:20:41.633 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:41.633 "eui64": "ABCDEF0123456789", 00:20:41.633 "uuid": "5edebdb3-be86-4bb4-b9c8-8de589d5a4ea" 00:20:41.633 } 00:20:41.633 ] 00:20:41.633 } 00:20:41.633 ] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.633 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:41.633 [2024-05-15 04:21:29.488861] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:20:41.633 [2024-05-15 04:21:29.488909] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436278 ] 00:20:41.633 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.633 [2024-05-15 04:21:29.524321] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:41.633 [2024-05-15 04:21:29.524384] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:41.633 [2024-05-15 04:21:29.524394] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:41.633 [2024-05-15 04:21:29.524412] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:41.633 [2024-05-15 04:21:29.524426] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:41.633 [2024-05-15 04:21:29.524821] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:41.633 [2024-05-15 04:21:29.524877] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc26c80 0 00:20:41.633 [2024-05-15 04:21:29.530946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:41.633 [2024-05-15 04:21:29.530975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:41.633 [2024-05-15 04:21:29.530990] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:41.633 [2024-05-15 04:21:29.530997] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:41.633 [2024-05-15 04:21:29.531055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.633 [2024-05-15 04:21:29.531068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.633 [2024-05-15 04:21:29.531076] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.633 [2024-05-15 04:21:29.531097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:41.633 [2024-05-15 04:21:29.531125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.633 [2024-05-15 04:21:29.531962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.633 [2024-05-15 04:21:29.531980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.633 [2024-05-15 04:21:29.531987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.633 [2024-05-15 04:21:29.531995] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.633 [2024-05-15 04:21:29.532016] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:41.633 [2024-05-15 04:21:29.532028] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:41.634 [2024-05-15 04:21:29.532038] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:41.634 [2024-05-15 04:21:29.532058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532067] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532073] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.532085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.532108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.532303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.532318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.532326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.532342] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:41.634 [2024-05-15 04:21:29.532355] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:41.634 [2024-05-15 04:21:29.532367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.532392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.532413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.532612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.532627] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.532634] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.532650] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:41.634 [2024-05-15 04:21:29.532669] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.532682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.532707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.532728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.532917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.532941] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.532950] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.532966] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.532988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.532997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533003] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.533014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.533035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.533227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.533239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.533246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.533262] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:41.634 [2024-05-15 04:21:29.533271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.533283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.533394] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:41.634 [2024-05-15 04:21:29.533403] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.533419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533433] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.533443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.533464] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.533657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.533673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.533679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 
[2024-05-15 04:21:29.533690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.533700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:41.634 [2024-05-15 04:21:29.533717] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.533742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.533764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.533962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.533978] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.533985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.533992] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.534000] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:41.634 [2024-05-15 04:21:29.534008] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:41.634 [2024-05-15 04:21:29.534022] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:41.634 [2024-05-15 04:21:29.534041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:41.634 [2024-05-15 04:21:29.534056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.534064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.534075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.634 [2024-05-15 04:21:29.534097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.534337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.634 [2024-05-15 04:21:29.534349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.634 [2024-05-15 04:21:29.534357] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.534364] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc26c80): datao=0, datal=4096, cccid=0 00:20:41.634 [2024-05-15 04:21:29.534372] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc85e40) on tqpair(0xc26c80): expected_datao=0, payload_size=4096 00:20:41.634 [2024-05-15 04:21:29.534379] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 
[2024-05-15 04:21:29.534426] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.534437] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575118] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.575137] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.575144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.575164] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:41.634 [2024-05-15 04:21:29.575173] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:41.634 [2024-05-15 04:21:29.575187] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:41.634 [2024-05-15 04:21:29.575197] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:41.634 [2024-05-15 04:21:29.575205] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:41.634 [2024-05-15 04:21:29.575213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:41.634 [2024-05-15 04:21:29.575233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:41.634 [2024-05-15 04:21:29.575250] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575266] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.575278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.634 [2024-05-15 04:21:29.575301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.634 [2024-05-15 04:21:29.575517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.634 [2024-05-15 04:21:29.575532] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.634 [2024-05-15 04:21:29.575539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575546] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc85e40) on tqpair=0xc26c80 00:20:41.634 [2024-05-15 04:21:29.575564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575579] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.575590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.634 [2024-05-15 04:21:29.575600] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.634 [2024-05-15 04:21:29.575613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc26c80) 00:20:41.634 [2024-05-15 04:21:29.575622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.635 [2024-05-15 04:21:29.575631] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.575638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.575644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.575653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.635 [2024-05-15 04:21:29.575663] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.575684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.575690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.575699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.635 [2024-05-15 04:21:29.575707] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:41.635 [2024-05-15 04:21:29.575723] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:41.635 [2024-05-15 04:21:29.575737] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.575745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.575755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.635 [2024-05-15 04:21:29.575778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85e40, cid 0, qid 0 00:20:41.635 [2024-05-15 04:21:29.575804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc85fa0, cid 1, qid 0 00:20:41.635 [2024-05-15 04:21:29.575811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86100, cid 2, qid 0 00:20:41.635 [2024-05-15 04:21:29.575819] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86260, cid 3, qid 0 00:20:41.635 [2024-05-15 04:21:29.575826] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc863c0, cid 4, qid 0 00:20:41.635 [2024-05-15 04:21:29.576049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.635 [2024-05-15 04:21:29.576065] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.635 [2024-05-15 04:21:29.576072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc863c0) on tqpair=0xc26c80 00:20:41.635 [2024-05-15 04:21:29.576095] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:41.635 [2024-05-15 04:21:29.576105] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:41.635 [2024-05-15 04:21:29.576124] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576134] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.576145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.635 [2024-05-15 04:21:29.576166] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc863c0, cid 4, qid 0 00:20:41.635 [2024-05-15 04:21:29.576386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.635 [2024-05-15 04:21:29.576401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.635 [2024-05-15 04:21:29.576408] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576415] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc26c80): datao=0, datal=4096, cccid=4 00:20:41.635 [2024-05-15 04:21:29.576422] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc863c0) on tqpair(0xc26c80): expected_datao=0, payload_size=4096 00:20:41.635 [2024-05-15 04:21:29.576430] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576440] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576447] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576540] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.635 [2024-05-15 04:21:29.576551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.635 [2024-05-15 04:21:29.576557] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc863c0) on tqpair=0xc26c80 00:20:41.635 [2024-05-15 04:21:29.576586] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:41.635 [2024-05-15 04:21:29.576629] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576640] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.576652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.635 [2024-05-15 04:21:29.576667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576675] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.576697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.576707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.635 [2024-05-15 04:21:29.576737] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc863c0, cid 4, qid 0 00:20:41.635 [2024-05-15 04:21:29.576763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86520, cid 5, qid 0 00:20:41.635 [2024-05-15 04:21:29.577035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.635 [2024-05-15 04:21:29.577052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.635 [2024-05-15 04:21:29.577059] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.577065] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc26c80): datao=0, datal=1024, cccid=4 00:20:41.635 [2024-05-15 04:21:29.577073] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc863c0) on tqpair(0xc26c80): expected_datao=0, payload_size=1024 00:20:41.635 [2024-05-15 04:21:29.577080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.577090] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.577097] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.577106] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.635 [2024-05-15 04:21:29.577115] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.635 [2024-05-15 04:21:29.577121] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.577128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc86520) on tqpair=0xc26c80 00:20:41.635 [2024-05-15 04:21:29.618149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.635 [2024-05-15 04:21:29.618168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.635 [2024-05-15 04:21:29.618176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.618183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc863c0) on tqpair=0xc26c80 00:20:41.635 [2024-05-15 04:21:29.618204] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.618213] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc26c80) 00:20:41.635 [2024-05-15 04:21:29.618225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.635 [2024-05-15 04:21:29.618254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc863c0, cid 4, qid 0 00:20:41.635 [2024-05-15 04:21:29.618468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.635 [2024-05-15 04:21:29.618484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.635 [2024-05-15 04:21:29.618491] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.618498] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc26c80): datao=0, datal=3072, cccid=4 00:20:41.635 [2024-05-15 04:21:29.618505] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc863c0) on tqpair(0xc26c80): expected_datao=0, payload_size=3072 00:20:41.635 [2024-05-15 04:21:29.618513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.618561] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.635 [2024-05-15 04:21:29.618571] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.897 [2024-05-15 04:21:29.659132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.897 [2024-05-15 04:21:29.659140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc863c0) on tqpair=0xc26c80 00:20:41.897 [2024-05-15 04:21:29.659169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc26c80) 00:20:41.897 [2024-05-15 04:21:29.659190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.897 [2024-05-15 04:21:29.659219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc863c0, cid 4, qid 0 00:20:41.897 [2024-05-15 04:21:29.659399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.897 [2024-05-15 04:21:29.659411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.897 [2024-05-15 04:21:29.659418] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659424] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc26c80): datao=0, datal=8, cccid=4 00:20:41.897 [2024-05-15 04:21:29.659432] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc863c0) on tqpair(0xc26c80): expected_datao=0, payload_size=8 00:20:41.897 [2024-05-15 04:21:29.659439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659449] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.659457] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.700111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.897 [2024-05-15 04:21:29.700130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.897 [2024-05-15 04:21:29.700138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.897 [2024-05-15 04:21:29.700145] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc863c0) on tqpair=0xc26c80 00:20:41.897 ===================================================== 00:20:41.897 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:41.897 ===================================================== 00:20:41.897 Controller Capabilities/Features 00:20:41.897 ================================ 00:20:41.897 Vendor ID: 0000 00:20:41.897 Subsystem Vendor ID: 0000 00:20:41.897 Serial Number: .................... 00:20:41.897 Model Number: ........................................ 
00:20:41.897 Firmware Version: 24.05 00:20:41.897 Recommended Arb Burst: 0 00:20:41.897 IEEE OUI Identifier: 00 00 00 00:20:41.897 Multi-path I/O 00:20:41.897 May have multiple subsystem ports: No 00:20:41.897 May have multiple controllers: No 00:20:41.897 Associated with SR-IOV VF: No 00:20:41.897 Max Data Transfer Size: 131072 00:20:41.897 Max Number of Namespaces: 0 00:20:41.897 Max Number of I/O Queues: 1024 00:20:41.897 NVMe Specification Version (VS): 1.3 00:20:41.897 NVMe Specification Version (Identify): 1.3 00:20:41.897 Maximum Queue Entries: 128 00:20:41.897 Contiguous Queues Required: Yes 00:20:41.897 Arbitration Mechanisms Supported 00:20:41.897 Weighted Round Robin: Not Supported 00:20:41.897 Vendor Specific: Not Supported 00:20:41.897 Reset Timeout: 15000 ms 00:20:41.897 Doorbell Stride: 4 bytes 00:20:41.897 NVM Subsystem Reset: Not Supported 00:20:41.897 Command Sets Supported 00:20:41.897 NVM Command Set: Supported 00:20:41.897 Boot Partition: Not Supported 00:20:41.897 Memory Page Size Minimum: 4096 bytes 00:20:41.897 Memory Page Size Maximum: 4096 bytes 00:20:41.897 Persistent Memory Region: Not Supported 00:20:41.897 Optional Asynchronous Events Supported 00:20:41.897 Namespace Attribute Notices: Not Supported 00:20:41.897 Firmware Activation Notices: Not Supported 00:20:41.897 ANA Change Notices: Not Supported 00:20:41.897 PLE Aggregate Log Change Notices: Not Supported 00:20:41.897 LBA Status Info Alert Notices: Not Supported 00:20:41.897 EGE Aggregate Log Change Notices: Not Supported 00:20:41.897 Normal NVM Subsystem Shutdown event: Not Supported 00:20:41.897 Zone Descriptor Change Notices: Not Supported 00:20:41.897 Discovery Log Change Notices: Supported 00:20:41.897 Controller Attributes 00:20:41.897 128-bit Host Identifier: Not Supported 00:20:41.897 Non-Operational Permissive Mode: Not Supported 00:20:41.897 NVM Sets: Not Supported 00:20:41.897 Read Recovery Levels: Not Supported 00:20:41.897 Endurance Groups: Not Supported 00:20:41.897 Predictable Latency Mode: Not Supported 00:20:41.897 Traffic Based Keep ALive: Not Supported 00:20:41.897 Namespace Granularity: Not Supported 00:20:41.897 SQ Associations: Not Supported 00:20:41.897 UUID List: Not Supported 00:20:41.897 Multi-Domain Subsystem: Not Supported 00:20:41.897 Fixed Capacity Management: Not Supported 00:20:41.897 Variable Capacity Management: Not Supported 00:20:41.897 Delete Endurance Group: Not Supported 00:20:41.897 Delete NVM Set: Not Supported 00:20:41.897 Extended LBA Formats Supported: Not Supported 00:20:41.897 Flexible Data Placement Supported: Not Supported 00:20:41.897 00:20:41.897 Controller Memory Buffer Support 00:20:41.897 ================================ 00:20:41.897 Supported: No 00:20:41.897 00:20:41.897 Persistent Memory Region Support 00:20:41.897 ================================ 00:20:41.897 Supported: No 00:20:41.897 00:20:41.897 Admin Command Set Attributes 00:20:41.897 ============================ 00:20:41.897 Security Send/Receive: Not Supported 00:20:41.897 Format NVM: Not Supported 00:20:41.897 Firmware Activate/Download: Not Supported 00:20:41.897 Namespace Management: Not Supported 00:20:41.897 Device Self-Test: Not Supported 00:20:41.897 Directives: Not Supported 00:20:41.897 NVMe-MI: Not Supported 00:20:41.897 Virtualization Management: Not Supported 00:20:41.897 Doorbell Buffer Config: Not Supported 00:20:41.897 Get LBA Status Capability: Not Supported 00:20:41.897 Command & Feature Lockdown Capability: Not Supported 00:20:41.897 Abort Command Limit: 1 00:20:41.897 Async 
Event Request Limit: 4 00:20:41.897 Number of Firmware Slots: N/A 00:20:41.897 Firmware Slot 1 Read-Only: N/A 00:20:41.897 Firmware Activation Without Reset: N/A 00:20:41.897 Multiple Update Detection Support: N/A 00:20:41.897 Firmware Update Granularity: No Information Provided 00:20:41.897 Per-Namespace SMART Log: No 00:20:41.897 Asymmetric Namespace Access Log Page: Not Supported 00:20:41.897 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:41.897 Command Effects Log Page: Not Supported 00:20:41.898 Get Log Page Extended Data: Supported 00:20:41.898 Telemetry Log Pages: Not Supported 00:20:41.898 Persistent Event Log Pages: Not Supported 00:20:41.898 Supported Log Pages Log Page: May Support 00:20:41.898 Commands Supported & Effects Log Page: Not Supported 00:20:41.898 Feature Identifiers & Effects Log Page:May Support 00:20:41.898 NVMe-MI Commands & Effects Log Page: May Support 00:20:41.898 Data Area 4 for Telemetry Log: Not Supported 00:20:41.898 Error Log Page Entries Supported: 128 00:20:41.898 Keep Alive: Not Supported 00:20:41.898 00:20:41.898 NVM Command Set Attributes 00:20:41.898 ========================== 00:20:41.898 Submission Queue Entry Size 00:20:41.898 Max: 1 00:20:41.898 Min: 1 00:20:41.898 Completion Queue Entry Size 00:20:41.898 Max: 1 00:20:41.898 Min: 1 00:20:41.898 Number of Namespaces: 0 00:20:41.898 Compare Command: Not Supported 00:20:41.898 Write Uncorrectable Command: Not Supported 00:20:41.898 Dataset Management Command: Not Supported 00:20:41.898 Write Zeroes Command: Not Supported 00:20:41.898 Set Features Save Field: Not Supported 00:20:41.898 Reservations: Not Supported 00:20:41.898 Timestamp: Not Supported 00:20:41.898 Copy: Not Supported 00:20:41.898 Volatile Write Cache: Not Present 00:20:41.898 Atomic Write Unit (Normal): 1 00:20:41.898 Atomic Write Unit (PFail): 1 00:20:41.898 Atomic Compare & Write Unit: 1 00:20:41.898 Fused Compare & Write: Supported 00:20:41.898 Scatter-Gather List 00:20:41.898 SGL Command Set: Supported 00:20:41.898 SGL Keyed: Supported 00:20:41.898 SGL Bit Bucket Descriptor: Not Supported 00:20:41.898 SGL Metadata Pointer: Not Supported 00:20:41.898 Oversized SGL: Not Supported 00:20:41.898 SGL Metadata Address: Not Supported 00:20:41.898 SGL Offset: Supported 00:20:41.898 Transport SGL Data Block: Not Supported 00:20:41.898 Replay Protected Memory Block: Not Supported 00:20:41.898 00:20:41.898 Firmware Slot Information 00:20:41.898 ========================= 00:20:41.898 Active slot: 0 00:20:41.898 00:20:41.898 00:20:41.898 Error Log 00:20:41.898 ========= 00:20:41.898 00:20:41.898 Active Namespaces 00:20:41.898 ================= 00:20:41.898 Discovery Log Page 00:20:41.898 ================== 00:20:41.898 Generation Counter: 2 00:20:41.898 Number of Records: 2 00:20:41.898 Record Format: 0 00:20:41.898 00:20:41.898 Discovery Log Entry 0 00:20:41.898 ---------------------- 00:20:41.898 Transport Type: 3 (TCP) 00:20:41.898 Address Family: 1 (IPv4) 00:20:41.898 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:41.898 Entry Flags: 00:20:41.898 Duplicate Returned Information: 1 00:20:41.898 Explicit Persistent Connection Support for Discovery: 1 00:20:41.898 Transport Requirements: 00:20:41.898 Secure Channel: Not Required 00:20:41.898 Port ID: 0 (0x0000) 00:20:41.898 Controller ID: 65535 (0xffff) 00:20:41.898 Admin Max SQ Size: 128 00:20:41.898 Transport Service Identifier: 4420 00:20:41.898 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:41.898 Transport Address: 10.0.0.2 00:20:41.898 
Discovery Log Entry 1 00:20:41.898 ---------------------- 00:20:41.898 Transport Type: 3 (TCP) 00:20:41.898 Address Family: 1 (IPv4) 00:20:41.898 Subsystem Type: 2 (NVM Subsystem) 00:20:41.898 Entry Flags: 00:20:41.898 Duplicate Returned Information: 0 00:20:41.898 Explicit Persistent Connection Support for Discovery: 0 00:20:41.898 Transport Requirements: 00:20:41.898 Secure Channel: Not Required 00:20:41.898 Port ID: 0 (0x0000) 00:20:41.898 Controller ID: 65535 (0xffff) 00:20:41.898 Admin Max SQ Size: 128 00:20:41.898 Transport Service Identifier: 4420 00:20:41.898 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:41.898 Transport Address: 10.0.0.2 [2024-05-15 04:21:29.700267] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:41.898 [2024-05-15 04:21:29.700295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.898 [2024-05-15 04:21:29.700308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.898 [2024-05-15 04:21:29.700317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.898 [2024-05-15 04:21:29.700327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.898 [2024-05-15 04:21:29.700342] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.700350] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.700357] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc26c80) 00:20:41.898 [2024-05-15 04:21:29.700368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.898 [2024-05-15 04:21:29.700409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86260, cid 3, qid 0 00:20:41.898 [2024-05-15 04:21:29.700675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.898 [2024-05-15 04:21:29.700688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.898 [2024-05-15 04:21:29.700695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.700702] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc86260) on tqpair=0xc26c80 00:20:41.898 [2024-05-15 04:21:29.700716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.700724] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.700731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc26c80) 00:20:41.898 [2024-05-15 04:21:29.700741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.898 [2024-05-15 04:21:29.700771] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86260, cid 3, qid 0 00:20:41.898 [2024-05-15 04:21:29.703940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.898 [2024-05-15 04:21:29.703958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.898 [2024-05-15 04:21:29.703965] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.703971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc86260) on tqpair=0xc26c80 00:20:41.898 [2024-05-15 04:21:29.703982] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:41.898 [2024-05-15 04:21:29.703991] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:41.898 [2024-05-15 04:21:29.704008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.704017] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.704023] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc26c80) 00:20:41.898 [2024-05-15 04:21:29.704034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.898 [2024-05-15 04:21:29.704056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86260, cid 3, qid 0 00:20:41.898 [2024-05-15 04:21:29.704284] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.898 [2024-05-15 04:21:29.704296] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.898 [2024-05-15 04:21:29.704303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.704310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc86260) on tqpair=0xc26c80 00:20:41.898 [2024-05-15 04:21:29.704325] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:20:41.898 00:20:41.898 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:41.898 [2024-05-15 04:21:29.738742] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
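Editor's note: the '-r' argument on the identify.sh command line above is a transport-ID string; the adrfam 1 / ai_family 2 / trsvcid 4420 values in the trace lines that follow come directly from parsing it. A small sketch of that parsing step, assuming spdk_nvme_transport_id_parse() from spdk/nvme.h; the string is copied from the command line and nothing else is taken from this run.

#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_nvme_transport_id trid;
    const char *str = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                      "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* adrfam prints as 1 (IPv4), matching the nvme_tcp_qpair_connect_sock trace. */
    printf("trtype=%d adrfam=%d traddr=%s trsvcid=%s subnqn=%s\n",
           (int)trid.trtype, (int)trid.adrfam,
           trid.traddr, trid.trsvcid, trid.subnqn);
    return 0;
}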
00:20:41.898 [2024-05-15 04:21:29.738787] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436283 ] 00:20:41.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.898 [2024-05-15 04:21:29.773744] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:41.898 [2024-05-15 04:21:29.773791] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:41.898 [2024-05-15 04:21:29.773801] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:41.898 [2024-05-15 04:21:29.773814] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:41.898 [2024-05-15 04:21:29.773825] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:41.898 [2024-05-15 04:21:29.774105] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:41.898 [2024-05-15 04:21:29.774143] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x890c80 0 00:20:41.898 [2024-05-15 04:21:29.780940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:41.898 [2024-05-15 04:21:29.780965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:41.898 [2024-05-15 04:21:29.780974] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:41.898 [2024-05-15 04:21:29.780984] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:41.898 [2024-05-15 04:21:29.781024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.781036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.898 [2024-05-15 04:21:29.781042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.898 [2024-05-15 04:21:29.781057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:41.899 [2024-05-15 04:21:29.781083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.788941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.788959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.788966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.788973] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.788992] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:41.899 [2024-05-15 04:21:29.789003] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:41.899 [2024-05-15 04:21:29.789013] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:41.899 [2024-05-15 04:21:29.789029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 
04:21:29.789045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.789057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.789080] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.789281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.789297] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.789304] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.789319] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:41.899 [2024-05-15 04:21:29.789332] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:41.899 [2024-05-15 04:21:29.789345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789353] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.789370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.789391] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.789587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.789603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.789609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.789625] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:41.899 [2024-05-15 04:21:29.789638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.789655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.789681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.789702] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.789900] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.789913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 
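Editor's note: the "read vs" / "read cap" states above fetch the VS and CAP registers with Fabrics Property Get commands rather than MMIO; the 15000 ms CSTS.RDY timeout used later is CAP.TO in 500 ms units, and the "1.3" in the identify output is decoded from VS. A plain-C sketch of that decoding using the register layout from the NVMe base spec; the raw register values are assumed so that they match this target, they were not captured from it.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vs  = 0x00010300;          /* assumed raw VS for "NVMe 1.3" */
    uint64_t cap = (uint64_t)30 << 24;  /* assumed CAP with TO = 30 */

    /* VS: MJR[31:16], MNR[15:8], TER[7:0]; CAP.TO[31:24] is in 500 ms units. */
    printf("VS  -> %u.%u.%u\n",
           (unsigned)((vs >> 16) & 0xffff),
           (unsigned)((vs >> 8) & 0xff),
           (unsigned)(vs & 0xff));
    printf("CAP -> ready timeout %u ms\n",
           (unsigned)((cap >> 24) & 0xff) * 500);
    return 0;
}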
[2024-05-15 04:21:29.789919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.789944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.789962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.789978] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.789989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.790010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.790206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.790218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.790225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790232] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.790239] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:41.899 [2024-05-15 04:21:29.790247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.790260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.790370] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:41.899 [2024-05-15 04:21:29.790377] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.790389] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790419] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.790430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.790450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.790659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.790672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.790678] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790685] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 
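Editor's note: the "Setting CC.EN = 1" step above enables the controller with a Fabrics Property Set of the CC register. A sketch of how such a CC value is assembled, using the bit layout from the NVMe base spec (EN bit 0, IOSQES bits 19:16, IOCQES bits 23:20); the 64-byte/16-byte queue-entry sizes are the usual defaults and are assumed here, not read from this run.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t cc = 0;

    cc |= 1u << 0;    /* EN: enable the controller */
    cc |= 6u << 16;   /* IOSQES: 2^6 = 64-byte submission queue entries */
    cc |= 4u << 20;   /* IOCQES: 2^4 = 16-byte completion queue entries */

    printf("CC = 0x%08x\n", cc);
    return 0;
}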
[2024-05-15 04:21:29.790693] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:41.899 [2024-05-15 04:21:29.790714] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.790741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.790762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.790967] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.790983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.790990] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.790996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.791004] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:41.899 [2024-05-15 04:21:29.791012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:41.899 [2024-05-15 04:21:29.791026] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:41.899 [2024-05-15 04:21:29.791040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:41.899 [2024-05-15 04:21:29.791054] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.899 [2024-05-15 04:21:29.791073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.899 [2024-05-15 04:21:29.791094] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.899 [2024-05-15 04:21:29.791344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.899 [2024-05-15 04:21:29.791359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.899 [2024-05-15 04:21:29.791366] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791373] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=4096, cccid=0 00:20:41.899 [2024-05-15 04:21:29.791380] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8efe40) on tqpair(0x890c80): expected_datao=0, payload_size=4096 00:20:41.899 [2024-05-15 04:21:29.791388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791399] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791406] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
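Editor's note: the identify payload just delivered (datal=4096) is what produces the "MDTS max_xfer_size 131072" line that follows: MDTS is a power of two in units of the controller's minimum memory page size. A worked version of that arithmetic in plain C; the mdts value of 5 is an assumption chosen to reproduce the number in the log.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t min_page_size = 4096;  /* 2^(12 + CAP.MPSMIN), assuming MPSMIN = 0 */
    uint8_t  mdts = 5;              /* assumed value reported by the target */
    uint64_t max_xfer = (uint64_t)min_page_size << mdts;

    printf("MDTS max_xfer_size %llu\n", (unsigned long long)max_xfer);
    return 0;
}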
00:20:41.899 [2024-05-15 04:21:29.791468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.899 [2024-05-15 04:21:29.791479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.899 [2024-05-15 04:21:29.791486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791492] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.899 [2024-05-15 04:21:29.791503] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:41.899 [2024-05-15 04:21:29.791512] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:41.899 [2024-05-15 04:21:29.791519] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:41.899 [2024-05-15 04:21:29.791526] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:41.899 [2024-05-15 04:21:29.791537] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:41.899 [2024-05-15 04:21:29.791546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:41.899 [2024-05-15 04:21:29.791564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:41.899 [2024-05-15 04:21:29.791579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.899 [2024-05-15 04:21:29.791588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.791605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.900 [2024-05-15 04:21:29.791626] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.900 [2024-05-15 04:21:29.791819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.791834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.791841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x890c80 00:20:41.900 [2024-05-15 04:21:29.791862] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.791888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.900 [2024-05-15 04:21:29.791898] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791905] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791911] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.791920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.900 [2024-05-15 04:21:29.791937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791946] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.791961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.900 [2024-05-15 04:21:29.791971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.791984] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.791992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.900 [2024-05-15 04:21:29.792001] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792028] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792035] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.792045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.900 [2024-05-15 04:21:29.792071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:20:41.900 [2024-05-15 04:21:29.792082] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8effa0, cid 1, qid 0 00:20:41.900 [2024-05-15 04:21:29.792090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0100, cid 2, qid 0 00:20:41.900 [2024-05-15 04:21:29.792098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.900 [2024-05-15 04:21:29.792106] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.900 [2024-05-15 04:21:29.792333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.792349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.792355] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792362] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.900 [2024-05-15 04:21:29.792374] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:41.900 [2024-05-15 04:21:29.792384] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:41.900 
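Editor's note: the "Sending keep alive every 5000000 us" line above follows from the negotiated keep-alive timeout; with the common 10000 ms default and a send interval of half the timeout, the numbers line up. Both the default and the halving are inferred from this log rather than confirmed, so treat the sketch below as illustrative only.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t kato_ms = 10000;  /* assumed negotiated keep-alive timeout */
    uint64_t interval_us = (uint64_t)kato_ms * 1000 / 2;  /* assumed send-at-half policy */

    printf("Sending keep alive every %llu us\n", (unsigned long long)interval_us);
    return 0;
}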
[2024-05-15 04:21:29.792399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.792447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.900 [2024-05-15 04:21:29.792468] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.900 [2024-05-15 04:21:29.792675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.792690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.792697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.900 [2024-05-15 04:21:29.792762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.792798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.792805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.792816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.900 [2024-05-15 04:21:29.792837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.900 [2024-05-15 04:21:29.796942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.900 [2024-05-15 04:21:29.796958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.900 [2024-05-15 04:21:29.796965] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.796972] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=4096, cccid=4 00:20:41.900 [2024-05-15 04:21:29.796980] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f03c0) on tqpair(0x890c80): expected_datao=0, payload_size=4096 00:20:41.900 [2024-05-15 04:21:29.796991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797002] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797010] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.797027] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.797034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797040] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.900 [2024-05-15 04:21:29.797062] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:41.900 [2024-05-15 04:21:29.797086] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.797105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.797119] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797127] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.797138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.900 [2024-05-15 04:21:29.797161] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.900 [2024-05-15 04:21:29.797390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.900 [2024-05-15 04:21:29.797406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.900 [2024-05-15 04:21:29.797413] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797419] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=4096, cccid=4 00:20:41.900 [2024-05-15 04:21:29.797427] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f03c0) on tqpair(0x890c80): expected_datao=0, payload_size=4096 00:20:41.900 [2024-05-15 04:21:29.797434] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797444] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797452] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797523] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.797534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.797541] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.900 [2024-05-15 04:21:29.797564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.797583] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:41.900 [2024-05-15 04:21:29.797597] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.900 [2024-05-15 04:21:29.797615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.900 [2024-05-15 04:21:29.797636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.900 [2024-05-15 04:21:29.797811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.900 [2024-05-15 04:21:29.797823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.900 [2024-05-15 04:21:29.797833] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797840] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=4096, cccid=4 00:20:41.900 [2024-05-15 04:21:29.797847] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f03c0) on tqpair(0x890c80): expected_datao=0, payload_size=4096 00:20:41.900 [2024-05-15 04:21:29.797855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797903] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.797913] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.900 [2024-05-15 04:21:29.798071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.900 [2024-05-15 04:21:29.798087] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.900 [2024-05-15 04:21:29.798093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.798121] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798154] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798165] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798173] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798183] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:41.901 [2024-05-15 04:21:29.798191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:41.901 [2024-05-15 04:21:29.798200] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:41.901 [2024-05-15 04:21:29.798222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.798258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.798270] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798277] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.798293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.901 [2024-05-15 04:21:29.798317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.901 [2024-05-15 04:21:29.798343] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0520, cid 5, qid 0 00:20:41.901 [2024-05-15 04:21:29.798543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.901 [2024-05-15 04:21:29.798557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.901 [2024-05-15 04:21:29.798563] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.798581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.901 [2024-05-15 04:21:29.798590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.901 [2024-05-15 04:21:29.798600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0520) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.798623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798632] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.798643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.798663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0520, cid 5, qid 0 00:20:41.901 [2024-05-15 04:21:29.798860] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.901 [2024-05-15 04:21:29.798872] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.901 [2024-05-15 04:21:29.798879] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0520) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.798900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.798909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.798920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.798948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0520, cid 5, qid 0 00:20:41.901 [2024-05-15 04:21:29.799125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.901 [2024-05-15 04:21:29.799138] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.901 [2024-05-15 04:21:29.799144] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0520) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.799166] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799175] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.799186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.799206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0520, cid 5, qid 0 00:20:41.901 [2024-05-15 04:21:29.799366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.901 [2024-05-15 04:21:29.799378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.901 [2024-05-15 04:21:29.799385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0520) on tqpair=0x890c80 00:20:41.901 [2024-05-15 04:21:29.799410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.799430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.799442] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.799459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.799470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.799491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.799507] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799516] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x890c80) 00:20:41.901 [2024-05-15 04:21:29.799526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.901 [2024-05-15 04:21:29.799563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0520, cid 5, qid 0 00:20:41.901 [2024-05-15 04:21:29.799575] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f03c0, cid 4, qid 0 00:20:41.901 [2024-05-15 04:21:29.799582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0680, cid 6, qid 0 00:20:41.901 [2024-05-15 04:21:29.799589] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f07e0, cid 7, qid 0 00:20:41.901 [2024-05-15 04:21:29.799893] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.901 [2024-05-15 04:21:29.799906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.901 [2024-05-15 04:21:29.799913] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.799919] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=8192, cccid=5 00:20:41.901 [2024-05-15 04:21:29.799926] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0520) on tqpair(0x890c80): expected_datao=0, payload_size=8192 00:20:41.901 [2024-05-15 04:21:29.799943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800075] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800086] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.901 [2024-05-15 04:21:29.800104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.901 [2024-05-15 04:21:29.800110] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800117] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=512, cccid=4 00:20:41.901 [2024-05-15 04:21:29.800126] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f03c0) on tqpair(0x890c80): expected_datao=0, payload_size=512 00:20:41.901 [2024-05-15 04:21:29.800133] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800142] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800149] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.901 [2024-05-15 04:21:29.800158] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.901 [2024-05-15 04:21:29.800166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.902 [2024-05-15 04:21:29.800173] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800179] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=512, cccid=6 00:20:41.902 [2024-05-15 04:21:29.800187] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0680) on tqpair(0x890c80): expected_datao=0, payload_size=512 00:20:41.902 [2024-05-15 04:21:29.800194] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800203] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800209] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:41.902 [2024-05-15 04:21:29.800226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:41.902 [2024-05-15 04:21:29.800233] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800239] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x890c80): datao=0, datal=4096, cccid=7 00:20:41.902 [2024-05-15 04:21:29.800250] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x8f07e0) on tqpair(0x890c80): expected_datao=0, payload_size=4096 00:20:41.902 [2024-05-15 04:21:29.800258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800268] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800275] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.902 [2024-05-15 04:21:29.800296] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.902 [2024-05-15 04:21:29.800303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800309] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0520) on tqpair=0x890c80 00:20:41.902 [2024-05-15 04:21:29.800328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.902 [2024-05-15 04:21:29.800341] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.902 [2024-05-15 04:21:29.800347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f03c0) on tqpair=0x890c80 00:20:41.902 [2024-05-15 04:21:29.800368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.902 [2024-05-15 04:21:29.800378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.902 [2024-05-15 04:21:29.800385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0680) on tqpair=0x890c80 00:20:41.902 [2024-05-15 04:21:29.800405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.902 [2024-05-15 04:21:29.800416] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.902 [2024-05-15 04:21:29.800422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.902 [2024-05-15 04:21:29.800429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f07e0) on tqpair=0x890c80 00:20:41.902 ===================================================== 00:20:41.902 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.902 ===================================================== 00:20:41.902 Controller Capabilities/Features 00:20:41.902 ================================ 00:20:41.902 Vendor ID: 8086 00:20:41.902 Subsystem Vendor ID: 8086 00:20:41.902 Serial Number: SPDK00000000000001 00:20:41.902 Model Number: SPDK bdev Controller 00:20:41.902 Firmware Version: 24.05 00:20:41.902 Recommended Arb Burst: 6 00:20:41.902 IEEE OUI Identifier: e4 d2 5c 00:20:41.902 Multi-path I/O 00:20:41.902 May have multiple subsystem ports: Yes 00:20:41.902 May have multiple controllers: Yes 00:20:41.902 Associated with SR-IOV VF: No 00:20:41.902 Max Data Transfer Size: 131072 00:20:41.902 Max Number of Namespaces: 32 00:20:41.902 Max Number of I/O Queues: 127 00:20:41.902 NVMe Specification Version (VS): 1.3 00:20:41.902 NVMe Specification Version (Identify): 1.3 00:20:41.902 Maximum Queue Entries: 128 00:20:41.902 Contiguous Queues Required: Yes 00:20:41.902 Arbitration Mechanisms Supported 00:20:41.902 Weighted Round Robin: Not Supported 00:20:41.902 Vendor Specific: Not Supported 00:20:41.902 Reset Timeout: 15000 ms 00:20:41.902 Doorbell Stride: 4 bytes 00:20:41.902 
NVM Subsystem Reset: Not Supported 00:20:41.902 Command Sets Supported 00:20:41.902 NVM Command Set: Supported 00:20:41.902 Boot Partition: Not Supported 00:20:41.902 Memory Page Size Minimum: 4096 bytes 00:20:41.902 Memory Page Size Maximum: 4096 bytes 00:20:41.902 Persistent Memory Region: Not Supported 00:20:41.902 Optional Asynchronous Events Supported 00:20:41.902 Namespace Attribute Notices: Supported 00:20:41.902 Firmware Activation Notices: Not Supported 00:20:41.902 ANA Change Notices: Not Supported 00:20:41.902 PLE Aggregate Log Change Notices: Not Supported 00:20:41.902 LBA Status Info Alert Notices: Not Supported 00:20:41.902 EGE Aggregate Log Change Notices: Not Supported 00:20:41.902 Normal NVM Subsystem Shutdown event: Not Supported 00:20:41.902 Zone Descriptor Change Notices: Not Supported 00:20:41.902 Discovery Log Change Notices: Not Supported 00:20:41.902 Controller Attributes 00:20:41.902 128-bit Host Identifier: Supported 00:20:41.902 Non-Operational Permissive Mode: Not Supported 00:20:41.902 NVM Sets: Not Supported 00:20:41.902 Read Recovery Levels: Not Supported 00:20:41.902 Endurance Groups: Not Supported 00:20:41.902 Predictable Latency Mode: Not Supported 00:20:41.902 Traffic Based Keep ALive: Not Supported 00:20:41.902 Namespace Granularity: Not Supported 00:20:41.902 SQ Associations: Not Supported 00:20:41.902 UUID List: Not Supported 00:20:41.902 Multi-Domain Subsystem: Not Supported 00:20:41.902 Fixed Capacity Management: Not Supported 00:20:41.902 Variable Capacity Management: Not Supported 00:20:41.902 Delete Endurance Group: Not Supported 00:20:41.902 Delete NVM Set: Not Supported 00:20:41.902 Extended LBA Formats Supported: Not Supported 00:20:41.902 Flexible Data Placement Supported: Not Supported 00:20:41.902 00:20:41.902 Controller Memory Buffer Support 00:20:41.902 ================================ 00:20:41.902 Supported: No 00:20:41.902 00:20:41.902 Persistent Memory Region Support 00:20:41.902 ================================ 00:20:41.902 Supported: No 00:20:41.902 00:20:41.902 Admin Command Set Attributes 00:20:41.902 ============================ 00:20:41.902 Security Send/Receive: Not Supported 00:20:41.902 Format NVM: Not Supported 00:20:41.902 Firmware Activate/Download: Not Supported 00:20:41.902 Namespace Management: Not Supported 00:20:41.902 Device Self-Test: Not Supported 00:20:41.902 Directives: Not Supported 00:20:41.902 NVMe-MI: Not Supported 00:20:41.902 Virtualization Management: Not Supported 00:20:41.902 Doorbell Buffer Config: Not Supported 00:20:41.902 Get LBA Status Capability: Not Supported 00:20:41.902 Command & Feature Lockdown Capability: Not Supported 00:20:41.902 Abort Command Limit: 4 00:20:41.902 Async Event Request Limit: 4 00:20:41.902 Number of Firmware Slots: N/A 00:20:41.902 Firmware Slot 1 Read-Only: N/A 00:20:41.902 Firmware Activation Without Reset: N/A 00:20:41.902 Multiple Update Detection Support: N/A 00:20:41.902 Firmware Update Granularity: No Information Provided 00:20:41.902 Per-Namespace SMART Log: No 00:20:41.902 Asymmetric Namespace Access Log Page: Not Supported 00:20:41.902 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:41.902 Command Effects Log Page: Supported 00:20:41.902 Get Log Page Extended Data: Supported 00:20:41.902 Telemetry Log Pages: Not Supported 00:20:41.902 Persistent Event Log Pages: Not Supported 00:20:41.902 Supported Log Pages Log Page: May Support 00:20:41.902 Commands Supported & Effects Log Page: Not Supported 00:20:41.902 Feature Identifiers & Effects Log Page:May Support 
00:20:41.902 NVMe-MI Commands & Effects Log Page: May Support 00:20:41.902 Data Area 4 for Telemetry Log: Not Supported 00:20:41.902 Error Log Page Entries Supported: 128 00:20:41.902 Keep Alive: Supported 00:20:41.902 Keep Alive Granularity: 10000 ms 00:20:41.902 00:20:41.902 NVM Command Set Attributes 00:20:41.902 ========================== 00:20:41.902 Submission Queue Entry Size 00:20:41.902 Max: 64 00:20:41.902 Min: 64 00:20:41.902 Completion Queue Entry Size 00:20:41.902 Max: 16 00:20:41.902 Min: 16 00:20:41.902 Number of Namespaces: 32 00:20:41.902 Compare Command: Supported 00:20:41.902 Write Uncorrectable Command: Not Supported 00:20:41.902 Dataset Management Command: Supported 00:20:41.902 Write Zeroes Command: Supported 00:20:41.902 Set Features Save Field: Not Supported 00:20:41.902 Reservations: Supported 00:20:41.902 Timestamp: Not Supported 00:20:41.902 Copy: Supported 00:20:41.902 Volatile Write Cache: Present 00:20:41.902 Atomic Write Unit (Normal): 1 00:20:41.902 Atomic Write Unit (PFail): 1 00:20:41.902 Atomic Compare & Write Unit: 1 00:20:41.902 Fused Compare & Write: Supported 00:20:41.902 Scatter-Gather List 00:20:41.902 SGL Command Set: Supported 00:20:41.902 SGL Keyed: Supported 00:20:41.902 SGL Bit Bucket Descriptor: Not Supported 00:20:41.902 SGL Metadata Pointer: Not Supported 00:20:41.902 Oversized SGL: Not Supported 00:20:41.902 SGL Metadata Address: Not Supported 00:20:41.902 SGL Offset: Supported 00:20:41.902 Transport SGL Data Block: Not Supported 00:20:41.902 Replay Protected Memory Block: Not Supported 00:20:41.902 00:20:41.902 Firmware Slot Information 00:20:41.902 ========================= 00:20:41.902 Active slot: 1 00:20:41.902 Slot 1 Firmware Revision: 24.05 00:20:41.902 00:20:41.902 00:20:41.902 Commands Supported and Effects 00:20:41.902 ============================== 00:20:41.902 Admin Commands 00:20:41.902 -------------- 00:20:41.902 Get Log Page (02h): Supported 00:20:41.902 Identify (06h): Supported 00:20:41.902 Abort (08h): Supported 00:20:41.903 Set Features (09h): Supported 00:20:41.903 Get Features (0Ah): Supported 00:20:41.903 Asynchronous Event Request (0Ch): Supported 00:20:41.903 Keep Alive (18h): Supported 00:20:41.903 I/O Commands 00:20:41.903 ------------ 00:20:41.903 Flush (00h): Supported LBA-Change 00:20:41.903 Write (01h): Supported LBA-Change 00:20:41.903 Read (02h): Supported 00:20:41.903 Compare (05h): Supported 00:20:41.903 Write Zeroes (08h): Supported LBA-Change 00:20:41.903 Dataset Management (09h): Supported LBA-Change 00:20:41.903 Copy (19h): Supported LBA-Change 00:20:41.903 Unknown (79h): Supported LBA-Change 00:20:41.903 Unknown (7Ah): Supported 00:20:41.903 00:20:41.903 Error Log 00:20:41.903 ========= 00:20:41.903 00:20:41.903 Arbitration 00:20:41.903 =========== 00:20:41.903 Arbitration Burst: 1 00:20:41.903 00:20:41.903 Power Management 00:20:41.903 ================ 00:20:41.903 Number of Power States: 1 00:20:41.903 Current Power State: Power State #0 00:20:41.903 Power State #0: 00:20:41.903 Max Power: 0.00 W 00:20:41.903 Non-Operational State: Operational 00:20:41.903 Entry Latency: Not Reported 00:20:41.903 Exit Latency: Not Reported 00:20:41.903 Relative Read Throughput: 0 00:20:41.903 Relative Read Latency: 0 00:20:41.903 Relative Write Throughput: 0 00:20:41.903 Relative Write Latency: 0 00:20:41.903 Idle Power: Not Reported 00:20:41.903 Active Power: Not Reported 00:20:41.903 Non-Operational Permissive Mode: Not Supported 00:20:41.903 00:20:41.903 Health Information 00:20:41.903 ================== 
00:20:41.903 Critical Warnings: 00:20:41.903 Available Spare Space: OK 00:20:41.903 Temperature: OK 00:20:41.903 Device Reliability: OK 00:20:41.903 Read Only: No 00:20:41.903 Volatile Memory Backup: OK 00:20:41.903 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:41.903 Temperature Threshold: [2024-05-15 04:21:29.800555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.800567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.800578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.800601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f07e0, cid 7, qid 0 00:20:41.903 [2024-05-15 04:21:29.800816] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.800831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.800838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.800845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f07e0) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.800886] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:41.903 [2024-05-15 04:21:29.800908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.903 [2024-05-15 04:21:29.800920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.903 [2024-05-15 04:21:29.804939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.903 [2024-05-15 04:21:29.804957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.903 [2024-05-15 04:21:29.804970] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.804995] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.805017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.805041] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.805239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.805251] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.805258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.805276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805290] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.805300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.805326] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.805504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.805520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.805526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.805541] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:41.903 [2024-05-15 04:21:29.805549] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:41.903 [2024-05-15 04:21:29.805565] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805580] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.805590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.805611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.805799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.805814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.805821] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805827] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.805843] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.805859] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.805870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.805890] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.806061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.806075] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.806082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806088] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.806104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806113] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806124] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.806135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.806155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.806347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.806358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.806365] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.806388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.806414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.806433] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.806588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.806600] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.806607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.806629] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 [2024-05-15 04:21:29.806655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.806675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.806832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.806844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.806850] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.903 [2024-05-15 04:21:29.806872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806881] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.903 [2024-05-15 04:21:29.806888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.903 
[2024-05-15 04:21:29.806899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.903 [2024-05-15 04:21:29.806919] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.903 [2024-05-15 04:21:29.807083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.903 [2024-05-15 04:21:29.807099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.903 [2024-05-15 04:21:29.807106] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.807129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.807159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.807180] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.807378] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.807390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.807397] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.807419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.807445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.807465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.807617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.807629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.807636] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807642] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.807658] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.807684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.807703] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.807859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.807871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.807878] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.807900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.807915] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.807926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.807953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.808133] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.808148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.808155] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808161] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.808178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808187] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808194] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.808204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.808228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.808389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.808401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.808408] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808415] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.808430] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808439] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.808456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.808477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.808631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 
[2024-05-15 04:21:29.808646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.808653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808659] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.808676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.808702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.808722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.808876] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.808891] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.808898] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.808904] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.808920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.812939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.812953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x890c80) 00:20:41.904 [2024-05-15 04:21:29.812965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.904 [2024-05-15 04:21:29.812988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0260, cid 3, qid 0 00:20:41.904 [2024-05-15 04:21:29.813185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:41.904 [2024-05-15 04:21:29.813200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:41.904 [2024-05-15 04:21:29.813207] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:41.904 [2024-05-15 04:21:29.813214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f0260) on tqpair=0x890c80 00:20:41.904 [2024-05-15 04:21:29.813227] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:41.904 0 Kelvin (-273 Celsius) 00:20:41.904 Available Spare: 0% 00:20:41.904 Available Spare Threshold: 0% 00:20:41.904 Life Percentage Used: 0% 00:20:41.904 Data Units Read: 0 00:20:41.904 Data Units Written: 0 00:20:41.904 Host Read Commands: 0 00:20:41.904 Host Write Commands: 0 00:20:41.904 Controller Busy Time: 0 minutes 00:20:41.904 Power Cycles: 0 00:20:41.904 Power On Hours: 0 hours 00:20:41.904 Unsafe Shutdowns: 0 00:20:41.904 Unrecoverable Media Errors: 0 00:20:41.904 Lifetime Error Log Entries: 0 00:20:41.904 Warning Temperature Time: 0 minutes 00:20:41.904 Critical Temperature Time: 0 minutes 00:20:41.904 00:20:41.904 Number of Queues 00:20:41.904 ================ 00:20:41.904 Number of I/O Submission Queues: 127 00:20:41.904 Number of I/O Completion Queues: 127 00:20:41.904 
00:20:41.904 Active Namespaces 00:20:41.904 ================= 00:20:41.904 Namespace ID:1 00:20:41.904 Error Recovery Timeout: Unlimited 00:20:41.904 Command Set Identifier: NVM (00h) 00:20:41.904 Deallocate: Supported 00:20:41.904 Deallocated/Unwritten Error: Not Supported 00:20:41.904 Deallocated Read Value: Unknown 00:20:41.904 Deallocate in Write Zeroes: Not Supported 00:20:41.904 Deallocated Guard Field: 0xFFFF 00:20:41.904 Flush: Supported 00:20:41.904 Reservation: Supported 00:20:41.904 Namespace Sharing Capabilities: Multiple Controllers 00:20:41.904 Size (in LBAs): 131072 (0GiB) 00:20:41.904 Capacity (in LBAs): 131072 (0GiB) 00:20:41.904 Utilization (in LBAs): 131072 (0GiB) 00:20:41.904 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:41.904 EUI64: ABCDEF0123456789 00:20:41.904 UUID: 5edebdb3-be86-4bb4-b9c8-8de589d5a4ea 00:20:41.904 Thin Provisioning: Not Supported 00:20:41.904 Per-NS Atomic Units: Yes 00:20:41.904 Atomic Boundary Size (Normal): 0 00:20:41.904 Atomic Boundary Size (PFail): 0 00:20:41.904 Atomic Boundary Offset: 0 00:20:41.904 Maximum Single Source Range Length: 65535 00:20:41.904 Maximum Copy Length: 65535 00:20:41.904 Maximum Source Range Count: 1 00:20:41.904 NGUID/EUI64 Never Reused: No 00:20:41.904 Namespace Write Protected: No 00:20:41.904 Number of LBA Formats: 1 00:20:41.904 Current LBA Format: LBA Format #00 00:20:41.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:41.904 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:41.904 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.905 rmmod nvme_tcp 00:20:41.905 rmmod nvme_fabrics 00:20:41.905 rmmod nvme_keyring 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3436129 ']' 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3436129 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3436129 ']' 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3436129 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux 
']' 00:20:41.905 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3436129 00:20:42.163 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:42.163 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:42.163 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3436129' 00:20:42.163 killing process with pid 3436129 00:20:42.163 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3436129 00:20:42.163 [2024-05-15 04:21:29.913180] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:42.163 04:21:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3436129 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.421 04:21:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.322 04:21:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.322 00:20:44.322 real 0m6.546s 00:20:44.322 user 0m7.364s 00:20:44.322 sys 0m2.230s 00:20:44.322 04:21:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:44.322 04:21:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.322 ************************************ 00:20:44.322 END TEST nvmf_identify 00:20:44.322 ************************************ 00:20:44.322 04:21:32 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:44.322 04:21:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:44.322 04:21:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:44.322 04:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:44.322 ************************************ 00:20:44.322 START TEST nvmf_perf 00:20:44.322 ************************************ 00:20:44.322 04:21:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:44.322 * Looking for test storage... 
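The identify output captured above can be cross-checked by hand against the same listener (TCP transport, 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1). A minimal sketch, assuming a stock nvme-cli install with the nvme-tcp kernel module loaded and an SPDK build tree; the binary path and device name below are assumptions, not taken from this log:

    # kernel initiator: discover, connect, dump the controller data, disconnect
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0    # compare with the Controller Capabilities/Features block above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # or SPDK's userspace identify example, driven with a transport ID string
    ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'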
00:20:44.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.581 04:21:32 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.581 04:21:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:47.110 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:47.110 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:47.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:47.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:47.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:20:47.111 00:20:47.111 --- 10.0.0.2 ping statistics --- 00:20:47.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.111 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:47.111 00:20:47.111 --- 10.0.0.1 ping statistics --- 00:20:47.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.111 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3438619 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3438619 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3438619 ']' 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:47.111 04:21:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.111 [2024-05-15 04:21:34.977191] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:20:47.111 [2024-05-15 04:21:34.977272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.111 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.111 [2024-05-15 04:21:35.054209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.369 [2024-05-15 04:21:35.166533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.369 [2024-05-15 04:21:35.166583] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
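The nvmf_tcp_init trace above builds the test topology without any virtual links: one E810 port (cvl_0_0) is moved into a dedicated network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of those commands, reconstructed from the trace (the interface names, addresses, and port 4420 are the values printed above, not general defaults):

# Reconstructed from the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two ports across namespaces keeps the traffic on the physical E810 link rather than the loopback path, which is what the phy (NET_TYPE=phy) configuration is meant to exercise.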
00:20:47.369 [2024-05-15 04:21:35.166612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.369 [2024-05-15 04:21:35.166624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.369 [2024-05-15 04:21:35.166633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.369 [2024-05-15 04:21:35.166725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.369 [2024-05-15 04:21:35.166791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.369 [2024-05-15 04:21:35.166838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.369 [2024-05-15 04:21:35.166840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:47.934 04:21:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:51.209 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:51.209 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:51.467 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:51.467 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:51.724 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:51.724 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:51.724 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:51.724 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:51.724 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.982 [2024-05-15 04:21:39.777342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.982 04:21:39 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.240 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:52.240 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.497 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:52.497 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:52.755 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.012 [2024-05-15 04:21:40.776742] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:53.012 [2024-05-15 04:21:40.777059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.012 04:21:40 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:53.270 04:21:41 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:53.270 04:21:41 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:53.270 04:21:41 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:53.270 04:21:41 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:54.640 Initializing NVMe Controllers 00:20:54.640 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:54.640 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:54.640 Initialization complete. Launching workers. 00:20:54.640 ======================================================== 00:20:54.640 Latency(us) 00:20:54.640 Device Information : IOPS MiB/s Average min max 00:20:54.640 PCIE (0000:88:00.0) NSID 1 from core 0: 84691.28 330.83 377.17 22.79 6020.45 00:20:54.640 ======================================================== 00:20:54.640 Total : 84691.28 330.83 377.17 22.79 6020.45 00:20:54.640 00:20:54.640 04:21:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.640 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.572 Initializing NVMe Controllers 00:20:55.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:55.572 Initialization complete. Launching workers. 
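Condensed from the perf.sh RPC calls traced above, the target-side provisioning boils down to the following sequence (rpc.py path shortened to the repo root; the Nvme0n1 bdev name comes from the earlier gen_nvme.sh / load_subsystem_config step, and Malloc0 is the bdev returned by bdev_malloc_create):

# Condensed from the rpc.py calls traced above.
scripts/rpc.py bdev_malloc_create 64 512                      # creates Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The deprecation warning about [listen_]address.transport printed above is emitted by the add_listener call.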
00:20:55.572 ======================================================== 00:20:55.572 Latency(us) 00:20:55.572 Device Information : IOPS MiB/s Average min max 00:20:55.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.78 0.25 16469.73 226.48 45698.68 00:20:55.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.78 0.25 16053.62 6969.25 48883.67 00:20:55.572 ======================================================== 00:20:55.572 Total : 126.55 0.49 16260.04 226.48 48883.67 00:20:55.572 00:20:55.572 04:21:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.572 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.943 Initializing NVMe Controllers 00:20:56.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:56.943 Initialization complete. Launching workers. 00:20:56.943 ======================================================== 00:20:56.943 Latency(us) 00:20:56.943 Device Information : IOPS MiB/s Average min max 00:20:56.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7286.71 28.46 4396.28 672.73 10444.71 00:20:56.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3806.28 14.87 8429.98 6562.35 17024.16 00:20:56.943 ======================================================== 00:20:56.943 Total : 11093.00 43.33 5780.34 672.73 17024.16 00:20:56.943 00:20:56.943 04:21:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:56.943 04:21:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:56.943 04:21:44 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.943 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.469 Initializing NVMe Controllers 00:20:59.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.469 Controller IO queue size 128, less than required. 00:20:59.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.469 Controller IO queue size 128, less than required. 00:20:59.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:59.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:59.469 Initialization complete. Launching workers. 
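Each of the spdk_nvme_perf runs in this block is the same binary pointed either at the local PCIe controller or at the NVMe/TCP listener configured above; only the workload knobs change between runs (-q queue depth, -o IO size, -w pattern, -M read percentage, -t seconds). The fabric-side form, as used above, looks like:

# Invocation pattern for the fabric-side runs above (binary path shortened);
# -q/-o/-w/-M/-t are the knobs varied from run to run.
build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The -r string selects the transport ID; the local-disk baseline earlier in the trace used -r 'trtype:PCIe traddr:0000:88:00.0' instead.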
00:20:59.469 ======================================================== 00:20:59.469 Latency(us) 00:20:59.469 Device Information : IOPS MiB/s Average min max 00:20:59.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 799.40 199.85 165213.22 98753.44 221871.15 00:20:59.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.43 144.36 228973.47 87263.21 347130.85 00:20:59.469 ======================================================== 00:20:59.469 Total : 1376.83 344.21 191953.63 87263.21 347130.85 00:20:59.469 00:20:59.469 04:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:59.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.726 No valid NVMe controllers or AIO or URING devices found 00:20:59.726 Initializing NVMe Controllers 00:20:59.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.726 Controller IO queue size 128, less than required. 00:20:59.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.726 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:59.726 Controller IO queue size 128, less than required. 00:20:59.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.726 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:59.726 WARNING: Some requested NVMe devices were skipped 00:20:59.983 04:21:47 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:59.983 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.511 Initializing NVMe Controllers 00:21:02.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.511 Controller IO queue size 128, less than required. 00:21:02.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.511 Controller IO queue size 128, less than required. 00:21:02.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:02.511 Initialization complete. Launching workers. 
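The 36964-byte run above ends with both namespaces dropped ('No valid NVMe controllers or AIO or URING devices found') because the -o IO size must be an integer multiple of the namespace block size. A quick check against the 512-byte sectors reported in the warnings:

36964 = 72 * 512 + 100   -> not sector-aligned, so nsid 1 and nsid 2 are removed from the test
36864 = 72 * 512         -> the nearest aligned size below it would have passed the check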
00:21:02.511 00:21:02.511 ==================== 00:21:02.511 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:02.511 TCP transport: 00:21:02.511 polls: 38468 00:21:02.511 idle_polls: 13484 00:21:02.511 sock_completions: 24984 00:21:02.511 nvme_completions: 3319 00:21:02.511 submitted_requests: 4992 00:21:02.511 queued_requests: 1 00:21:02.511 00:21:02.511 ==================== 00:21:02.511 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:02.511 TCP transport: 00:21:02.511 polls: 38868 00:21:02.511 idle_polls: 14455 00:21:02.511 sock_completions: 24413 00:21:02.511 nvme_completions: 3227 00:21:02.511 submitted_requests: 4852 00:21:02.511 queued_requests: 1 00:21:02.511 ======================================================== 00:21:02.511 Latency(us) 00:21:02.511 Device Information : IOPS MiB/s Average min max 00:21:02.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 828.95 207.24 159098.90 82139.80 231610.34 00:21:02.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 805.97 201.49 162468.02 79522.20 261917.84 00:21:02.511 ======================================================== 00:21:02.511 Total : 1634.93 408.73 160759.78 79522.20 261917.84 00:21:02.511 00:21:02.511 04:21:50 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:02.511 04:21:50 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.769 rmmod nvme_tcp 00:21:02.769 rmmod nvme_fabrics 00:21:02.769 rmmod nvme_keyring 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3438619 ']' 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3438619 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3438619 ']' 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3438619 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3438619 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:02.769 04:21:50 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3438619' 00:21:02.769 killing process with pid 3438619 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3438619 00:21:02.769 [2024-05-15 04:21:50.701275] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:02.769 04:21:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3438619 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.713 04:21:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.616 04:21:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.616 00:21:06.616 real 0m22.111s 00:21:06.616 user 1m8.315s 00:21:06.616 sys 0m5.028s 00:21:06.616 04:21:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:06.616 04:21:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:06.616 ************************************ 00:21:06.616 END TEST nvmf_perf 00:21:06.616 ************************************ 00:21:06.617 04:21:54 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:06.617 04:21:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:06.617 04:21:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:06.617 04:21:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.617 ************************************ 00:21:06.617 START TEST nvmf_fio_host 00:21:06.617 ************************************ 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:06.617 * Looking for test storage... 
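For reference, the nvmftestfini sequence traced above (just before the nvmf_fio_host test begins) unwinds the setup; condensed from the trace, with the pid printed there:

# Condensed from the nvmftestfini trace above; 3438619 is the nvmf_tgt pid from this run.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
modprobe -v -r nvme-fabrics
kill 3438619                     # killprocess: stop the nvmf_tgt started for this test
# remove_spdk_ns cleans up the cvl_0_0_ns_spdk namespace (its commands are not echoed in the trace)
ip -4 addr flush cvl_0_1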
00:21:06.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.617 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.618 04:21:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
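fio.sh sources nvmf/common.sh again, which regenerates a host NQN/ID pair with nvme gen-hostnqn and defines NVME_CONNECT for tests that attach through the kernel initiator. This particular test drives the target through the SPDK fio plugin instead, so purely as a hypothetical illustration of those variables, a kernel-side attach would look roughly like:

# Hypothetical kernel-initiator attach using the variables defined above; this command
# does not appear in the trace, since fio.sh uses the SPDK fio plugin rather than nvme-cli.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"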
00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:09.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:09.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:09.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:09.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:09.180 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.181 04:21:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:09.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:21:09.181 00:21:09.181 --- 10.0.0.2 ping statistics --- 00:21:09.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.181 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:21:09.181 00:21:09.181 --- 10.0.0.1 ping statistics --- 00:21:09.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.181 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3442948 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3442948 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3442948 ']' 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.181 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.181 [2024-05-15 04:21:57.188187] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:09.181 [2024-05-15 04:21:57.188272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.438 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.438 [2024-05-15 04:21:57.270290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.438 [2024-05-15 04:21:57.390350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:09.438 [2024-05-15 04:21:57.390401] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.439 [2024-05-15 04:21:57.390418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.439 [2024-05-15 04:21:57.390431] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.439 [2024-05-15 04:21:57.390442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.439 [2024-05-15 04:21:57.390496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.439 [2024-05-15 04:21:57.390535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.439 [2024-05-15 04:21:57.390599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.439 [2024-05-15 04:21:57.390602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 [2024-05-15 04:21:57.512454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.696 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.697 Malloc1 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:21:09.697 [2024-05-15 04:21:57.583196] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:09.697 [2024-05-15 04:21:57.583492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:09.697 
04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:09.697 04:21:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.954 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:09.954 fio-3.35 00:21:09.954 Starting 1 thread 00:21:09.954 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.481 00:21:12.481 test: (groupid=0, jobs=1): err= 0: pid=3443101: Wed May 15 04:22:00 2024 00:21:12.481 read: IOPS=9139, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:21:12.481 slat (nsec): min=1977, max=181985, avg=2578.85, stdev=1928.61 00:21:12.481 clat (usec): min=2579, max=13903, avg=7755.25, stdev=578.95 00:21:12.481 lat (usec): min=2602, max=13906, avg=7757.83, stdev=578.84 00:21:12.481 clat percentiles (usec): 00:21:12.481 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:21:12.481 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:21:12.481 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:21:12.481 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[12256], 00:21:12.481 | 99.99th=[13829] 00:21:12.481 bw ( KiB/s): min=35648, max=37152, per=99.91%, avg=36524.00, stdev=635.33, samples=4 00:21:12.481 iops : min= 8912, max= 9288, avg=9131.00, stdev=158.83, samples=4 00:21:12.481 write: IOPS=9147, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2006msec); 0 zone resets 00:21:12.481 slat (usec): min=2, max=142, avg= 2.71, stdev= 1.46 00:21:12.481 clat (usec): min=1710, max=12171, avg=6207.43, stdev=520.35 00:21:12.481 lat (usec): min=1717, max=12174, avg=6210.14, stdev=520.34 00:21:12.481 clat percentiles (usec): 00:21:12.481 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:21:12.481 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:21:12.481 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:21:12.481 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[10814], 99.95th=[11469], 00:21:12.481 | 99.99th=[12125] 00:21:12.481 bw ( KiB/s): min=36464, max=36800, per=99.99%, avg=36588.00, stdev=155.33, samples=4 00:21:12.481 iops : min= 9116, max= 9200, avg=9147.00, stdev=38.83, samples=4 00:21:12.481 lat (msec) : 2=0.02%, 4=0.10%, 10=99.71%, 20=0.17% 00:21:12.481 cpu : usr=53.72%, sys=36.76%, ctx=75, majf=0, minf=5 00:21:12.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:12.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:12.481 issued rwts: total=18333,18350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:12.481 00:21:12.481 Run status group 0 (all jobs): 00:21:12.482 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:21:12.482 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:12.482 04:22:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:12.482 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:12.482 fio-3.35 00:21:12.482 Starting 1 thread 00:21:12.482 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.007 00:21:15.007 test: (groupid=0, jobs=1): err= 0: pid=3443559: Wed May 15 04:22:02 2024 00:21:15.007 read: IOPS=7726, BW=121MiB/s (127MB/s)(242MiB/2008msec) 00:21:15.007 slat (usec): min=2, max=122, avg= 3.70, stdev= 2.00 00:21:15.007 clat (usec): min=3647, max=22747, avg=10179.41, stdev=2921.45 00:21:15.007 lat (usec): min=3650, max=22752, avg=10183.11, 
stdev=2921.57 00:21:15.007 clat percentiles (usec): 00:21:15.007 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7767], 00:21:15.007 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10421], 00:21:15.007 | 70.00th=[11207], 80.00th=[12256], 90.00th=[14222], 95.00th=[15926], 00:21:15.007 | 99.00th=[18482], 99.50th=[19006], 99.90th=[22676], 99.95th=[22676], 00:21:15.007 | 99.99th=[22676] 00:21:15.007 bw ( KiB/s): min=54048, max=66016, per=50.21%, avg=62072.00, stdev=5424.13, samples=4 00:21:15.007 iops : min= 3378, max= 4126, avg=3879.50, stdev=339.01, samples=4 00:21:15.007 write: IOPS=4465, BW=69.8MiB/s (73.2MB/s)(127MiB/1824msec); 0 zone resets 00:21:15.007 slat (usec): min=30, max=149, avg=34.39, stdev= 5.79 00:21:15.007 clat (usec): min=7030, max=19899, avg=11292.08, stdev=1813.40 00:21:15.007 lat (usec): min=7061, max=19931, avg=11326.47, stdev=1813.82 00:21:15.007 clat percentiles (usec): 00:21:15.007 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:21:15.007 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:21:15.007 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13698], 95.00th=[14746], 00:21:15.007 | 99.00th=[16188], 99.50th=[16319], 99.90th=[17433], 99.95th=[18744], 00:21:15.007 | 99.99th=[19792] 00:21:15.007 bw ( KiB/s): min=56544, max=68864, per=90.77%, avg=64856.00, stdev=5629.66, samples=4 00:21:15.008 iops : min= 3534, max= 4304, avg=4053.50, stdev=351.85, samples=4 00:21:15.008 lat (msec) : 4=0.05%, 10=43.53%, 20=56.27%, 50=0.15% 00:21:15.008 cpu : usr=73.94%, sys=21.92%, ctx=22, majf=0, minf=1 00:21:15.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:15.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.008 issued rwts: total=15514,8145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.008 00:21:15.008 Run status group 0 (all jobs): 00:21:15.008 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=242MiB (254MB), run=2008-2008msec 00:21:15.008 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=127MiB (133MB), run=1824-1824msec 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.008 rmmod nvme_tcp 00:21:15.008 rmmod nvme_fabrics 00:21:15.008 rmmod nvme_keyring 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3442948 ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3442948 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3442948 ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3442948 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3442948 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3442948' 00:21:15.008 killing process with pid 3442948 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3442948 00:21:15.008 [2024-05-15 04:22:02.845405] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:15.008 04:22:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3442948 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.267 04:22:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.801 04:22:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:17.801 00:21:17.801 real 0m10.751s 00:21:17.801 user 0m26.850s 00:21:17.801 sys 0m4.047s 00:21:17.801 04:22:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:17.801 04:22:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.801 ************************************ 00:21:17.801 END TEST nvmf_fio_host 00:21:17.801 ************************************ 00:21:17.801 04:22:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.801 04:22:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:17.801 04:22:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:21:17.801 04:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:17.801 ************************************ 00:21:17.801 START TEST nvmf_failover 00:21:17.801 ************************************ 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.801 * Looking for test storage... 00:21:17.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.801 04:22:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.706 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.706 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:19.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:19.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:19.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:19.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.707 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:21:19.967 00:21:19.967 --- 10.0.0.2 ping statistics --- 00:21:19.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.967 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:21:19.967 00:21:19.967 --- 10.0.0.1 ping statistics --- 00:21:19.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.967 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3446036 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3446036 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3446036 ']' 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.967 04:22:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.967 [2024-05-15 04:22:07.857028] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:21:19.967 [2024-05-15 04:22:07.857102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.967 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.967 [2024-05-15 04:22:07.932054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:20.226 [2024-05-15 04:22:08.039294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.226 [2024-05-15 04:22:08.039350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.226 [2024-05-15 04:22:08.039363] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.226 [2024-05-15 04:22:08.039374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.226 [2024-05-15 04:22:08.039383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.226 [2024-05-15 04:22:08.039466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.226 [2024-05-15 04:22:08.039533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.226 [2024-05-15 04:22:08.039530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.226 04:22:08 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:20.484 [2024-05-15 04:22:08.401820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.484 04:22:08 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:20.742 Malloc0 00:21:20.742 04:22:08 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.999 04:22:08 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.257 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.515 [2024-05-15 04:22:09.460528] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:21.515 [2024-05-15 04:22:09.460789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.515 04:22:09 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.773 [2024-05-15 04:22:09.713495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.773 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:22.031 [2024-05-15 04:22:09.958330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3446325 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3446325 /var/tmp/bdevperf.sock 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3446325 ']' 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
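For context, the failover host setup traced above amounts to starting bdevperf in standby mode and then configuring it over its RPC socket. A minimal sketch of that sequence, with the socket path, ports and subsystem NQN taken from this run (relative script paths here are shorthand for the full workspace paths in the trace):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # attach the same subsystem once per listener so bdev_nvme has a second path to fail over to
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the 15-second verify workload against the attached bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests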
00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.031 04:22:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:22.963 04:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:22.963 04:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:22.963 04:22:10 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.527 NVMe0n1 00:21:23.527 04:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.785 00:21:23.785 04:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3446468 00:21:23.785 04:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.785 04:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:24.753 04:22:12 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.012 [2024-05-15 04:22:12.816389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816633] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.012 [2024-05-15 04:22:12.816735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 
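The burst of nvmf_tcp_qpair_set_recv_state messages above appears to be the target-side reaction to the first failover trigger: host/failover.sh@43 removes the port-4420 listener while the bdevperf verify job is still running, so the queue pairs connected through that listener are torn down and I/O continues on the 4421 path. The trigger itself is a single RPC against the target, reproduced from the trace:

  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420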
00:21:25.013 [2024-05-15 04:22:12.816907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.816991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is 
same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 [2024-05-15 04:22:12.817353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55ea0 is same with the state(5) to be set 00:21:25.013 04:22:12 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:28.292 04:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.292 00:21:28.292 04:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:28.550 [2024-05-15 04:22:16.503047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 
00:21:28.550 [2024-05-15 04:22:16.503146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is 
same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 [2024-05-15 04:22:16.503511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56730 is same with the state(5) to be set 00:21:28.550 04:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:31.830 04:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.830 [2024-05-15 04:22:19.803275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.830 04:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:33.205 04:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:33.205 [2024-05-15 04:22:21.082664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 
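(Another hedged sketch, again not part of the captured output, covering the @53 to @57 sequence above: the original listener on 4420 is re-added, the script sleeps one second so the host side can pick the path up, and the 4422 listener is then removed. All arguments are copied from the log; the shorthand variables are the same as in the previous sketch.)

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Re-create the original listener on port 4420 (the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420").
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
# Tear down the 4422 listener, pushing I/O back toward the 4420 path.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422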
00:21:33.205 [2024-05-15 04:22:21.082859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.082992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is 
same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 [2024-05-15 04:22:21.083330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfbfb0 is same with the state(5) to be set 00:21:33.205 04:22:21 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3446468 00:21:39.770 0 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3446325 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3446325 ']' 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3446325 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3446325 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3446325' 00:21:39.770 killing process with pid 3446325 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3446325 00:21:39.770 04:22:26 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@970 -- # wait 3446325 00:21:39.770 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.770 [2024-05-15 04:22:10.023889] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:39.770 [2024-05-15 04:22:10.024058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446325 ] 00:21:39.770 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.770 [2024-05-15 04:22:10.116032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.770 [2024-05-15 04:22:10.226457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.770 Running I/O for 15 seconds... 00:21:39.770 [2024-05-15 04:22:12.817712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.770 [2024-05-15 04:22:12.817754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.770 [2024-05-15 04:22:12.817783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.770 [2024-05-15 04:22:12.817799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.770 [2024-05-15 04:22:12.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.770 [2024-05-15 04:22:12.817831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.770 [2024-05-15 04:22:12.817846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.770 [2024-05-15 04:22:12.817860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.770 [2024-05-15 04:22:12.817875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.770 [2024-05-15 04:22:12.817889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.770 [2024-05-15 04:22:12.817905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.817919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.817942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.817958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.817973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 
[2024-05-15 04:22:12.817986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.771 [2024-05-15 04:22:12.818645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.818983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.818998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.819010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.819024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.819037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.819051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.819064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.819078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.771 [2024-05-15 04:22:12.819091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.771 [2024-05-15 04:22:12.819105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 
04:22:12.819162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.772 [2024-05-15 04:22:12.819434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.819974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81232 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.772 [2024-05-15 04:22:12.820300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.772 [2024-05-15 04:22:12.820315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 
04:22:12.820356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.820495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.820983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.820999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.821029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.773 [2024-05-15 04:22:12.821058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.773 [2024-05-15 04:22:12.821529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.773 [2024-05-15 04:22:12.821544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfa5a0 is same with the state(5) to be set 00:21:39.773 [2024-05-15 04:22:12.821567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:39.773 [2024-05-15 04:22:12.821579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.773 [2024-05-15 04:22:12.821592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:21:39.774 [2024-05-15 04:22:12.821605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:12.821665] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdfa5a0 was disconnected and freed. reset controller. 00:21:39.774 [2024-05-15 04:22:12.821690] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:39.774 [2024-05-15 04:22:12.821738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:12.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:12.821773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:12.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:12.821800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:12.821813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:12.821827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:12.821841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:12.821855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.774 [2024-05-15 04:22:12.825140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.774 [2024-05-15 04:22:12.825179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddb2f0 (9): Bad file descriptor 00:21:39.774 [2024-05-15 04:22:12.858605] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
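(The try.txt excerpt above shows the first failover end to end: the qpair to 10.0.0.2:4420 is disconnected and freed, bdev_nvme starts failover from 4420 to 4421, and the controller reset completes with "Resetting controller successful". A plausible interactive check of the surviving path is sketched below; it was not run in this job, and the -n flag is assumed from rpc.py's usual bdev_nvme_get_controllers options rather than taken from this log.)

# Assumed invocation, not from the log: query controller state on the bdevperf side after the failover.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0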
00:21:39.774 [2024-05-15 04:22:16.504042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:16.504086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:16.504124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:16.504151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.774 [2024-05-15 04:22:16.504178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2f0 is same with the state(5) to be set 00:21:39.774 [2024-05-15 04:22:16.504283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.774 [2024-05-15 04:22:16.504315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.504978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.504992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.505010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.505025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.505039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.505053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.505066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.505081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.505094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.774 [2024-05-15 04:22:16.505108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.774 [2024-05-15 04:22:16.505121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 
[2024-05-15 04:22:16.505421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.775 [2024-05-15 04:22:16.505762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.505973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.505986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.506042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.506069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.506098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.775 [2024-05-15 04:22:16.506130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.775 [2024-05-15 04:22:16.506146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.506979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.506994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 
04:22:16.507215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.776 [2024-05-15 04:22:16.507403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.776 [2024-05-15 04:22:16.507416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:16.507897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.507973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.507987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.508016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.508050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.508078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.508107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.777 [2024-05-15 04:22:16.508136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:39.777 [2024-05-15 04:22:16.508179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.777 [2024-05-15 04:22:16.508192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0 00:21:39.777 [2024-05-15 04:22:16.508204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:16.508289] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa4830 was disconnected and freed. reset controller. 00:21:39.777 [2024-05-15 04:22:16.508307] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:39.777 [2024-05-15 04:22:16.508320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.777 [2024-05-15 04:22:16.511619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.777 [2024-05-15 04:22:16.511657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddb2f0 (9): Bad file descriptor 00:21:39.777 [2024-05-15 04:22:16.585291] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:39.777 [2024-05-15 04:22:21.084712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.084980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.084994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.085022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.085049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.085076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.085104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.777 [2024-05-15 04:22:21.085131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.777 [2024-05-15 04:22:21.085144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 
04:22:21.085543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.778 [2024-05-15 04:22:21.085692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.085975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.085989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.778 [2024-05-15 04:22:21.086341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.778 [2024-05-15 04:22:21.086354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 
04:22:21.086720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.086857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.086964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.086981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.087000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.087030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.087060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.087088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.779 [2024-05-15 04:22:21.087116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.779 [2024-05-15 04:22:21.087377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.779 [2024-05-15 04:22:21.087408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.087958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.087974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 
04:22:21.087996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.780 [2024-05-15 04:22:21.088575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:21:39.780 [2024-05-15 04:22:21.088618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.780 [2024-05-15 04:22:21.088631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70976 len:8 PRP1 0x0 PRP2 0x0 00:21:39.780 [2024-05-15 04:22:21.088645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.780 [2024-05-15 04:22:21.088713] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa4830 was disconnected and freed. reset controller. 00:21:39.780 [2024-05-15 04:22:21.088732] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:39.780 [2024-05-15 04:22:21.088764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.781 [2024-05-15 04:22:21.088798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.781 [2024-05-15 04:22:21.088814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.781 [2024-05-15 04:22:21.088832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.781 [2024-05-15 04:22:21.088847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.781 [2024-05-15 04:22:21.088860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.781 [2024-05-15 04:22:21.088874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.781 [2024-05-15 04:22:21.088887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.781 [2024-05-15 04:22:21.088901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.781 [2024-05-15 04:22:21.088962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddb2f0 (9): Bad file descriptor 00:21:39.781 [2024-05-15 04:22:21.092267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.781 [2024-05-15 04:22:21.257781] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
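The sequence above, where outstanding READ/WRITE commands are aborted with ABORTED - SQ DELETION, the qpair is freed, failover moves from 10.0.0.2:4422 to 10.0.0.2:4420 and the controller reset completes, is bdev_nvme switching to another path registered under the same controller name. A minimal sketch of how those alternate paths get registered, reusing the rpc.py invocations that appear later in this log (the socket path, addresses, ports and the cnode1 NQN are the ones this particular test uses, not defaults):

  # register one controller name with several target paths so bdev_nvme can fail over between them
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1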
00:21:39.781 
00:21:39.781 Latency(us) 
00:21:39.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:39.781 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:21:39.781 Verification LBA range: start 0x0 length 0x4000 
00:21:39.781 NVMe0n1 : 15.01 9106.43 35.57 689.52 0.00 13038.45 831.34 17185.00 
00:21:39.781 =================================================================================================================== 
00:21:39.781 Total : 9106.43 35.57 689.52 0.00 13038.45 831.34 17185.00 
00:21:39.781 Received shutdown signal, test time was about 15.000000 seconds 
00:21:39.781 
00:21:39.781 Latency(us) 
00:21:39.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:39.781 =================================================================================================================== 
00:21:39.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3448315 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3448315 /var/tmp/bdevperf.sock 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3448315 ']' 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:39.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
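The relaunch above starts bdevperf in RPC server mode (-z) against /var/tmp/bdevperf.sock and then sits in the waitforlisten helper until that socket answers. A simplified stand-in for that wait, assuming the same socket path and using rpc_get_methods, an RPC every SPDK application serves:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll the RPC socket until bdevperf is ready to accept bdev_nvme_attach_controller calls
  while ! scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$bdevperf_pid" || exit 1   # stop waiting if the process already exited
    sleep 0.5
  done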
00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:39.781 [2024-05-15 04:22:27.568405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:39.781 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:40.039 [2024-05-15 04:22:27.813074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:40.039 04:22:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.296 NVMe0n1 00:21:40.296 04:22:28 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.862 00:21:40.862 04:22:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.120 00:21:41.120 04:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.120 04:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:41.378 04:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.635 04:22:29 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:44.912 04:22:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.912 04:22:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:44.912 04:22:32 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3448984 00:21:44.912 04:22:32 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.912 04:22:32 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3448984 00:21:46.305 0 00:21:46.305 04:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:46.305 [2024-05-15 04:22:27.067165] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:21:46.305 [2024-05-15 04:22:27.067282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448315 ] 00:21:46.305 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.305 [2024-05-15 04:22:27.137379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.305 [2024-05-15 04:22:27.242159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.305 [2024-05-15 04:22:29.535681] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:46.305 [2024-05-15 04:22:29.535778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.305 [2024-05-15 04:22:29.535801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.305 [2024-05-15 04:22:29.535818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.305 [2024-05-15 04:22:29.535831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.305 [2024-05-15 04:22:29.535978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.305 [2024-05-15 04:22:29.536000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.305 [2024-05-15 04:22:29.536016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.305 [2024-05-15 04:22:29.536030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.305 [2024-05-15 04:22:29.536044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.305 [2024-05-15 04:22:29.536086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.305 [2024-05-15 04:22:29.536118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c02f0 (9): Bad file descriptor 00:21:46.305 [2024-05-15 04:22:29.669189] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:46.305 Running I/O for 1 seconds... 
00:21:46.305 00:21:46.305 Latency(us) 00:21:46.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.305 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:46.305 Verification LBA range: start 0x0 length 0x4000 00:21:46.305 NVMe0n1 : 1.01 8790.81 34.34 0.00 0.00 14485.22 983.04 12184.84 00:21:46.305 =================================================================================================================== 00:21:46.305 Total : 8790.81 34.34 0.00 0.00 14485.22 983.04 12184.84 00:21:46.305 04:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:46.305 04:22:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:46.305 04:22:34 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:46.562 04:22:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:46.562 04:22:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:46.819 04:22:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.077 04:22:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:50.357 04:22:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.357 04:22:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3448315 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3448315 ']' 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3448315 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3448315 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3448315' 00:21:50.357 killing process with pid 3448315 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3448315 00:21:50.357 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3448315 00:21:50.614 04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:50.614 04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.872 04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:50.872 
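The teardown above walks the remaining paths one at a time: detach the 4422 and 4421 trids, confirm with bdev_nvme_get_controllers | grep -q NVMe0 that the controller is still present, then kill bdevperf and delete the subsystem. Together with the earlier count=3 check, that is the whole pass/fail criterion of the test. A condensed sketch of that check, assuming the bdevperf output was captured to try.txt as this script does:

  # force a failover by detaching the path currently in use
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # the controller must still be reachable through one of the remaining paths
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  # every successful failover leaves one 'Resetting controller successful' line in the captured output
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || exit 1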
04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:50.872 04:22:38 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:50.872 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.872 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.873 rmmod nvme_tcp 00:21:50.873 rmmod nvme_fabrics 00:21:50.873 rmmod nvme_keyring 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3446036 ']' 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3446036 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3446036 ']' 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3446036 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3446036 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3446036' 00:21:50.873 killing process with pid 3446036 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3446036 00:21:50.873 [2024-05-15 04:22:38.779199] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:50.873 04:22:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3446036 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.132 04:22:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.665 04:22:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.666 00:21:53.666 real 0m35.860s 00:21:53.666 user 
2m5.029s 00:21:53.666 sys 0m6.213s 00:21:53.666 04:22:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:53.666 04:22:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.666 ************************************ 00:21:53.666 END TEST nvmf_failover 00:21:53.666 ************************************ 00:21:53.666 04:22:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:53.666 04:22:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:53.666 04:22:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:53.666 04:22:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.666 ************************************ 00:21:53.666 START TEST nvmf_host_discovery 00:21:53.666 ************************************ 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:53.666 * Looking for test storage... 00:21:53.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.666 04:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
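For reference, the host discovery test parameters traced above reduce to the following shell settings (every value is copied from the host/discovery.sh lines in this log; nothing beyond what the trace shows is assumed):

  DISCOVERY_PORT=8009                                  # discovery service listener port
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery   # well-known discovery NQN
  NQN=nqn.2016-06.io.spdk:cnode                        # prefix for the test subsystem (cnode0 later in the trace)
  HOST_NQN=nqn.2021-12.io.spdk:test                    # host NQN used by the discovery client
  HOST_SOCK=/tmp/host.sock                             # RPC socket of the host-side SPDK app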
00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.198 04:22:43 
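A condensed sketch of the nvmf_tcp_init plumbing using the interface names (cvl_0_0 / cvl_0_1), addresses and namespace name printed above; the individual commands appear verbatim in the trace lines that follow, only the grouping here is added:

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target NIC moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root netns
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target reachability check
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # and back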
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:21:56.198 00:21:56.198 --- 10.0.0.2 ping statistics --- 00:21:56.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.198 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:56.198 00:21:56.198 --- 10.0.0.1 ping statistics --- 00:21:56.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.198 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3451991 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3451991 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3451991 ']' 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:56.198 04:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.198 [2024-05-15 04:22:43.824738] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:56.198 [2024-05-15 04:22:43.824828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.198 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.198 [2024-05-15 04:22:43.904402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.198 [2024-05-15 04:22:44.020680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:56.198 [2024-05-15 04:22:44.020747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.198 [2024-05-15 04:22:44.020763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.198 [2024-05-15 04:22:44.020776] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.198 [2024-05-15 04:22:44.020789] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.198 [2024-05-15 04:22:44.020831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 [2024-05-15 04:22:44.833070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 [2024-05-15 04:22:44.841031] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:57.131 [2024-05-15 04:22:44.841336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 null0 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 null1 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3452147 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3452147 /tmp/host.sock 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3452147 ']' 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:57.131 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:57.131 04:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.131 [2024-05-15 04:22:44.914133] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:21:57.131 [2024-05-15 04:22:44.914223] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452147 ] 00:21:57.131 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.131 [2024-05-15 04:22:45.002684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.131 [2024-05-15 04:22:45.114032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.066 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:58.066 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:21:58.066 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:45 
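Condensing the target- and host-side bring-up traced above into one place (commands and arguments are copied from the nvmf_tgt and rpc_cmd invocations in this log; rpc_cmd is the test suite's RPC helper around scripts/rpc.py, and the binary paths are shortened from the full workspace paths shown above):

  # target side: runs inside the cvl_0_0_ns_spdk namespace (pid 3451991 above)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512        # two null bdevs to be exported later
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # host side: a second SPDK app on its own RPC socket (pid 3452147 above),
  # started and then told to run discovery against the target's port 8009
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test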
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.067 04:22:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.067 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 [2024-05-15 04:22:46.208988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:58.326 
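The get_subsystem_names / get_bdev_list checks repeated throughout this trace are thin RPC-plus-jq pipelines; a reconstruction from the fragments above (the function wrappers are an assumption, the commands and jq filters are verbatim from the log):

  get_subsystem_names() {    # names of NVMe controllers attached on the host app
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {          # bdevs (nvme0n1, nvme0n2, ...) created from those controllers
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

Both return an empty string until discovery attaches something, which is why the early checks above compare against ''.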
04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.326 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:21:58.584 04:22:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:21:59.151 [2024-05-15 04:22:46.974143] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.151 [2024-05-15 04:22:46.974185] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.151 [2024-05-15 04:22:46.974226] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.151 [2024-05-15 04:22:47.101621] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:59.409 [2024-05-15 04:22:47.203474] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:59.409 [2024-05-15 04:22:47.203508] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:59.409 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.667 04:22:47 
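The waitforcondition calls driving these checks poll a shell condition with a bounded retry count; a rough reconstruction from the trace (max=10, the eval of the condition, the return 0 and the sleep 1 are all visible above, the failure path is an assumption):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1                 # retry once per second, as in the trace
      done
      return 1                    # assumed failure path; not exercised in this log
  }

  # typical conditions seen above:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
  waitforcondition 'get_notification_count && ((notification_count == expected_count))'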
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.667 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.668 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.926 [2024-05-15 04:22:47.805586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.926 [2024-05-15 04:22:47.806083] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:59.926 [2024-05-15 04:22:47.806119] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.926 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.927 [2024-05-15 04:22:47.934954] 
bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:59.927 04:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:00.185 [2024-05-15 04:22:48.036702] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:00.185 [2024-05-15 04:22:48.036728] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:00.185 [2024-05-15 04:22:48.036739] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 04:22:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 [2024-05-15 04:22:49.037898] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:01.119 [2024-05-15 04:22:49.037968] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.119 [2024-05-15 04:22:49.042124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.119 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:01.119 [2024-05-15 04:22:49.042263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.120 [2024-05-15 04:22:49.042288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.120 [2024-05-15 04:22:49.042311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.120 [2024-05-15 04:22:49.042325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.120 [2024-05-15 04:22:49.042339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.120 [2024-05-15 04:22:49.042353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.120 [2024-05-15 04:22:49.042368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.120 [2024-05-15 04:22:49.042383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to 
be set 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.120 [2024-05-15 04:22:49.052111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.120 [2024-05-15 04:22:49.062160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.062536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.062755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.062782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.062806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.062832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.062869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.062888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.062907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.120 [2024-05-15 04:22:49.062927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.120 [2024-05-15 04:22:49.072241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.072536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.072766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.072792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.072808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.072830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.072852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.072866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.072880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.120 [2024-05-15 04:22:49.072914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.120 [2024-05-15 04:22:49.082330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.082545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.083735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.083766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.083781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.083804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.083903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.083947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.083963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.120 [2024-05-15 04:22:49.083993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.120 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.120 [2024-05-15 04:22:49.092594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.092853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.093061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.093089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.093106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.093129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.093151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.093166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.093179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.120 [2024-05-15 04:22:49.093216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.120 [2024-05-15 04:22:49.102670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.102944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.103133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.103159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.103176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.103198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.103219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.103233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.103247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.120 [2024-05-15 04:22:49.103281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.120 [2024-05-15 04:22:49.112743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.120 [2024-05-15 04:22:49.112996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.113177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.120 [2024-05-15 04:22:49.113203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.120 [2024-05-15 04:22:49.113219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.120 [2024-05-15 04:22:49.113242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.120 [2024-05-15 04:22:49.113270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.120 [2024-05-15 04:22:49.113286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.120 [2024-05-15 04:22:49.113300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.121 [2024-05-15 04:22:49.113319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.121 [2024-05-15 04:22:49.122815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.121 [2024-05-15 04:22:49.123062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.121 [2024-05-15 04:22:49.123245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.121 [2024-05-15 04:22:49.123285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x758900 with addr=10.0.0.2, port=4420 00:22:01.121 [2024-05-15 04:22:49.123301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x758900 is same with the state(5) to be set 00:22:01.121 [2024-05-15 04:22:49.123323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758900 (9): Bad file descriptor 00:22:01.121 [2024-05-15 04:22:49.123344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:01.121 [2024-05-15 04:22:49.123358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:01.121 [2024-05-15 04:22:49.123371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:01.121 [2024-05-15 04:22:49.123390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.121 [2024-05-15 04:22:49.126710] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:01.121 [2024-05-15 04:22:49.126736] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:01.121 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.379 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.380 04:22:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 [2024-05-15 04:22:50.414092] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:02.812 [2024-05-15 04:22:50.414123] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:02.812 [2024-05-15 04:22:50.414144] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:02.812 [2024-05-15 04:22:50.500461] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:02.812 [2024-05-15 04:22:50.565721] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:02.812 [2024-05-15 04:22:50.565773] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 request: 00:22:02.812 { 00:22:02.812 "name": "nvme", 00:22:02.812 "trtype": "tcp", 00:22:02.812 "traddr": "10.0.0.2", 00:22:02.812 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:02.812 "adrfam": "ipv4", 00:22:02.812 "trsvcid": "8009", 00:22:02.812 "wait_for_attach": true, 00:22:02.812 "method": "bdev_nvme_start_discovery", 00:22:02.812 "req_id": 1 00:22:02.812 } 00:22:02.812 Got JSON-RPC error response 00:22:02.812 response: 00:22:02.812 { 00:22:02.812 "code": -17, 00:22:02.812 "message": "File exists" 00:22:02.812 } 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 request: 00:22:02.812 { 00:22:02.812 "name": "nvme_second", 00:22:02.812 "trtype": "tcp", 00:22:02.812 "traddr": "10.0.0.2", 00:22:02.812 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:02.812 "adrfam": "ipv4", 00:22:02.812 "trsvcid": "8009", 00:22:02.812 "wait_for_attach": true, 00:22:02.812 "method": "bdev_nvme_start_discovery", 00:22:02.812 "req_id": 1 00:22:02.812 } 00:22:02.812 Got JSON-RPC error response 00:22:02.812 response: 00:22:02.812 { 00:22:02.812 "code": -17, 00:22:02.812 "message": "File exists" 00:22:02.812 } 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:02.813 
04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.813 04:22:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.747 [2024-05-15 04:22:51.761345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.747 [2024-05-15 04:22:51.761603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.747 [2024-05-15 04:22:51.761634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78aad0 with addr=10.0.0.2, port=8010 00:22:03.747 [2024-05-15 04:22:51.761665] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:03.747 [2024-05-15 04:22:51.761682] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:03.747 [2024-05-15 04:22:51.761697] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:05.121 [2024-05-15 04:22:52.763649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.121 [2024-05-15 04:22:52.763928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.121 [2024-05-15 04:22:52.763982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x756710 with addr=10.0.0.2, port=8010 00:22:05.121 [2024-05-15 04:22:52.764006] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:05.121 [2024-05-15 04:22:52.764020] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:05.121 [2024-05-15 04:22:52.764034] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:06.054 [2024-05-15 04:22:53.765827] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:06.054 request: 00:22:06.054 { 00:22:06.054 "name": "nvme_second", 00:22:06.054 "trtype": "tcp", 00:22:06.054 "traddr": "10.0.0.2", 00:22:06.054 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:06.054 
"adrfam": "ipv4", 00:22:06.054 "trsvcid": "8010", 00:22:06.054 "attach_timeout_ms": 3000, 00:22:06.054 "method": "bdev_nvme_start_discovery", 00:22:06.054 "req_id": 1 00:22:06.054 } 00:22:06.054 Got JSON-RPC error response 00:22:06.054 response: 00:22:06.054 { 00:22:06.054 "code": -110, 00:22:06.054 "message": "Connection timed out" 00:22:06.054 } 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3452147 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.054 rmmod nvme_tcp 00:22:06.054 rmmod nvme_fabrics 00:22:06.054 rmmod nvme_keyring 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:06.054 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3451991 ']' 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3451991 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3451991 ']' 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3451991 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3451991 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3451991' 00:22:06.055 killing process with pid 3451991 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3451991 00:22:06.055 [2024-05-15 04:22:53.887100] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:06.055 04:22:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3451991 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.313 04:22:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.222 04:22:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.222 00:22:08.222 real 0m15.019s 00:22:08.222 user 0m21.818s 00:22:08.222 sys 0m3.170s 00:22:08.222 04:22:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:08.222 04:22:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.222 ************************************ 00:22:08.222 END TEST nvmf_host_discovery 00:22:08.222 ************************************ 00:22:08.222 04:22:56 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:08.222 04:22:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:08.222 04:22:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:08.222 04:22:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.481 ************************************ 00:22:08.481 START TEST nvmf_host_multipath_status 00:22:08.481 ************************************ 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:08.481 * Looking for test storage... 
00:22:08.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.481 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.482 04:22:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.482 04:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.014 04:22:58 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.014 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:22:11.015 00:22:11.015 --- 10.0.0.2 ping statistics --- 00:22:11.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.015 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:11.015 00:22:11.015 --- 10.0.0.1 ping statistics --- 00:22:11.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.015 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.015 04:22:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3455606 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3455606 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3455606 ']' 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.015 04:22:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:11.274 [2024-05-15 04:22:59.056502] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
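The two E810 ports are wired into a loopback topology before any NVMe-oF traffic flows: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with a single iptables rule opening port 4420 and a ping in each direction as a sanity check. A condensed reconstruction of that nvmf_tcp_init sequence, using only commands visible in the trace above (run as root):

  NVMF_INITIATOR_IP=10.0.0.1
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the root namespace
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1   # target -> initiator

Everything the target does from here on is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launch below goes through NVMF_TARGET_NS_CMD.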
00:22:11.274 [2024-05-15 04:22:59.056593] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.274 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.274 [2024-05-15 04:22:59.136421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:11.274 [2024-05-15 04:22:59.241873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.274 [2024-05-15 04:22:59.241924] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.274 [2024-05-15 04:22:59.241971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.274 [2024-05-15 04:22:59.241983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.274 [2024-05-15 04:22:59.241993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.274 [2024-05-15 04:22:59.242062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.274 [2024-05-15 04:22:59.242068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3455606 00:22:12.208 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:12.466 [2024-05-15 04:23:00.275893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.466 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:12.724 Malloc0 00:22:12.724 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:12.982 04:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.240 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.497 [2024-05-15 04:23:01.315168] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:13.497 [2024-05-15 04:23:01.315447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.497 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:13.755 [2024-05-15 04:23:01.552020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3455897 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3455897 /var/tmp/bdevperf.sock 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3455897 ']' 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
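Once nvmf_tgt is up on /var/tmp/spdk.sock inside the namespace, the multipath target is assembled with ordinary RPCs: one TCP transport, one malloc bdev, one subsystem with ANA reporting enabled, and two listeners on ports 4420 and 4421 of the same 10.0.0.2 address; bdevperf then attaches that subsystem twice under a single bdev name, which is what creates the two I/O paths polled for the rest of the test. A condensed sketch of the sequence, taken from the RPC calls visible in the trace (rpc.py paths shortened; the flag interpretations in the comments are assumptions):

  rpc=scripts/rpc.py                                   # target-side RPCs, default socket /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0            # assumed: 64 MB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"      # initiator-side RPCs go to bdevperf's socket
  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10                   # second path to the same bdev

The bdevperf attach calls appear in the trace just below, after its RPC socket comes up.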
00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:13.755 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:14.014 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.014 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:14.014 04:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:14.272 04:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:14.838 Nvme0n1 00:22:14.838 04:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:15.404 Nvme0n1 00:22:15.404 04:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:15.404 04:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:17.304 04:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:17.304 04:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:17.562 04:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:17.819 04:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:18.783 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:18.783 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:18.783 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.783 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.041 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.041 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:19.041 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.041 04:23:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.299 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.299 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.299 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.299 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:19.558 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.558 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:19.558 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.558 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:19.816 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.816 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:19.816 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.816 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.073 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.073 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.073 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.073 04:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.330 04:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.330 04:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:20.330 04:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:20.588 04:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:20.846 04:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:21.778 04:23:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:21.779 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:21.779 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.779 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.036 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.036 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:22.036 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.036 04:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.294 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.294 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.294 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.294 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.551 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.551 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.551 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.551 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.810 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.810 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.810 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.810 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.068 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.068 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.068 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.068 04:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.326 04:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.326 04:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:23.326 04:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:23.584 04:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:23.842 04:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:24.775 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:24.775 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:24.775 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.775 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.034 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.034 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:25.034 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.034 04:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.292 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.292 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.292 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.292 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.550 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.550 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.550 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.550 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.808 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.808 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:25.808 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.808 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.066 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.066 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.066 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.066 04:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.324 04:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.324 04:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:26.324 04:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:26.582 04:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:26.841 04:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:27.774 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:27.774 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:27.774 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.774 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.032 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.032 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:28.032 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.032 04:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.292 04:23:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.292 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.292 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.292 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.551 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.551 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.551 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.551 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.809 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.809 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.809 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.809 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.068 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.068 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:29.068 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.068 04:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.328 04:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.328 04:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:29.328 04:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:29.586 04:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:29.843 04:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:30.776 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:30.776 04:23:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:30.777 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.777 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.033 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.033 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.033 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.033 04:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.291 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.291 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.291 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.291 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.576 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.576 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.576 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.576 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.839 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.839 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:31.839 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.839 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.103 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.103 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:32.103 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.103 04:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.361 04:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.361 04:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:32.361 04:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:32.361 04:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:32.619 04:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.992 04:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.250 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.250 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.250 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.250 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.508 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.508 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.508 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.508 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.765 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.765 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:34.765 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.765 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.023 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.023 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:35.023 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.023 04:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.280 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.280 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:35.538 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:35.538 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:35.795 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:36.052 04:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:36.985 04:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:36.985 04:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.985 04:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.985 04:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.243 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.243 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:37.244 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.244 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:22:37.503 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.503 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.503 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.503 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.761 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.761 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.761 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.761 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.019 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.019 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:38.019 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.019 04:23:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.277 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.277 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.277 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.277 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.535 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.535 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:38.535 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:38.793 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:39.052 04:23:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:39.986 04:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:22:39.986 04:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:39.986 04:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.986 04:23:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.244 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.244 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:40.244 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.244 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.503 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.503 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.503 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.503 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:40.761 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.761 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:40.761 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.761 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.019 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.019 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.019 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.019 04:23:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.277 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.277 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.277 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.277 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.535 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.535 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:41.535 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:41.793 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:42.052 04:23:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:42.987 04:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:42.987 04:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:42.987 04:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.987 04:23:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.245 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.245 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:43.245 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.245 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.504 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.504 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.504 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.504 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.762 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.762 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.762 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.762 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.021 04:23:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.021 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.021 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.021 04:23:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.279 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.279 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.279 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.279 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.537 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.537 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:44.537 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:44.801 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:45.098 04:23:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:46.032 04:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:46.032 04:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:46.032 04:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.032 04:23:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.290 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.290 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:46.290 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.290 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.548 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.548 04:23:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.548 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.548 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.806 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.806 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.806 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.806 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.064 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.064 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.064 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.064 04:23:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:47.321 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.321 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:47.321 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.321 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3455897 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3455897 ']' 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3455897 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3455897 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3455897' 00:22:47.577 killing process with pid 3455897 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3455897 00:22:47.577 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3455897 00:22:47.837 Connection closed with partial response: 00:22:47.837 00:22:47.837 00:22:47.837 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3455897 00:22:47.837 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.837 [2024-05-15 04:23:01.608484] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:47.837 [2024-05-15 04:23:01.608572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455897 ] 00:22:47.837 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.837 [2024-05-15 04:23:01.683241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.837 [2024-05-15 04:23:01.792619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.837 Running I/O for 90 seconds... 00:22:47.837 [2024-05-15 04:23:17.378363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.837 [2024-05-15 04:23:17.378437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:47.837 [2024-05-15 04:23:17.378505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.837 [2024-05-15 04:23:17.378527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:47.837 [2024-05-15 04:23:17.378552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.837 [2024-05-15 04:23:17.378569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:47.837 [2024-05-15 04:23:17.378592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.837 [2024-05-15 04:23:17.378609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:47.837 [2024-05-15 04:23:17.378632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.837 [2024-05-15 04:23:17.378649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:47.837 [2024-05-15 04:23:17.378671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.837 [2024-05-15 04:23:17.378688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378710] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.838 [2024-05-15 04:23:17.378727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.838 [2024-05-15 04:23:17.378766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.838 [2024-05-15 04:23:17.378805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.378844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.378973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.378999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 
m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.379966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
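The path checks traced earlier in this run (multipath_status.sh@64 through @73) all reduce to one bdevperf-side RPC plus a jq filter. The following is a minimal sketch reconstructed from the xtrace, not a verbatim copy of test/nvmf/host/multipath_status.sh, so the helper bodies are approximate; the rpc.py path and the /var/tmp/bdevperf.sock socket are the ones used above.

    #!/usr/bin/env bash
    # Sketch of the status helpers exercised by check_status above (approximate).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>: read one field (current,
    # connected or accessible) of the io_path listening on the given port
    # and compare it with the expected boolean.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status takes the six expected values in the order the trace shows:
    # 4420 current, 4421 current, 4420 connected, 4421 connected,
    # 4420 accessible, 4421 accessible.
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

With both listeners reporting a usable ANA state, every queried field comes back true, which is what the check_status true true true true true true step above expects.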
00:22:47.838 [2024-05-15 04:23:17.380431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.380510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:47.838 [2024-05-15 04:23:17.381332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.838 [2024-05-15 04:23:17.381350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.381392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.381435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.381476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.381956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
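The ANA flips themselves are two target-side RPCs, one per listener, as the @59/@60 lines show. Below is a sketch of set_ANA_state, reusing the rpc_py variable from the previous sketch; the NQN and target address are the ones this run uses, and the only states exercised in the trace are non_optimized and inaccessible.

    # set_ANA_state <state for listener 4420> <state for listener 4421>
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Mirrors steps @133-@135 above: make the 4421 listener inaccessible, give
    # the host a second to process the ANA change, then expect 4421 to drop out
    # of the current/accessible columns while the connection itself stays up.
    set_ANA_state non_optimized inaccessible
    sleep 1
    check_status true false true true true false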
00:22:47.839 [2024-05-15 04:23:17.382251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.382942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.382977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.382996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.839 [2024-05-15 04:23:17.383281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
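Teardown at @137 goes through killprocess from common/autotest_common.sh; the traced steps (@946 through @970) amount to a guarded kill-and-reap. An approximate reconstruction, not a copy of the real helper:

    # killprocess <pid>: refuse an empty pid, check the process is still alive,
    # make sure its comm is not "sudo" before signalling it (here ps reports
    # reactor_2, bdevperf's reactor thread), then kill it and wait so the exit
    # status is collected before try.txt is dumped.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 0          # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ "$process_name" != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }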
00:22:47.839 [2024-05-15 04:23:17.383727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:47.839 [2024-05-15 04:23:17.383841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.839 [2024-05-15 04:23:17.383857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.383885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.383920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.383958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.383984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:17.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:17.384749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.938958] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.938975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 
p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.840 [2024-05-15 04:23:32.940548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.840 [2024-05-15 04:23:32.940927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.940963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.940980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.941002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.941022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.941045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.840 [2024-05-15 04:23:32.941061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:47.840 [2024-05-15 04:23:32.941083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.941387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.941403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.841 [2024-05-15 04:23:32.942867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.942951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.942969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.841 [2024-05-15 04:23:32.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.841 [2024-05-15 04:23:32.943566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:22:47.841 Received shutdown signal, test time was about 32.151438 seconds
00:22:47.841
00:22:47.841                                                Latency(us)
00:22:47.841 Device Information  :  runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:22:47.841 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:47.841   Verification LBA range: start 0x0 length 0x4000
00:22:47.841   Nvme0n1             :      32.15    7950.14      31.06       0.00       0.00   16073.84     549.17 4026531.84
00:22:47.841 ===================================================================================================================
00:22:47.841   Total               :              7950.14      31.06       0.00       0.00   16073.84     549.17 4026531.84
00:22:47.841 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:48.099 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:48.099 04:23:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.099 rmmod nvme_tcp 00:22:48.099 rmmod nvme_fabrics 00:22:48.099 rmmod nvme_keyring 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3455606 ']' 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3455606 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3455606 ']' 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3455606 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3455606 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3455606' 00:22:48.099 killing process with pid 3455606 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3455606 00:22:48.099 [2024-05-15 04:23:36.096169] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:48.099 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3455606 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.664 
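The nvmftestfini call above (continuing into the namespace cleanup that follows) tears the target environment down in a fixed order. A rough shell sketch of that order, assuming the helper variables nvmf/common.sh uses ($nvmfpid, the cvl_0_* devices, the cvl_0_0_ns_spdk namespace), not the exact implementation:
# flush outstanding I/O, unload the kernel NVMe-oF modules, stop the target app
sync
modprobe -v -r nvme-tcp        # also drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor started for the test
# assumed equivalent of remove_spdk_ns, then clear the initiator-side test address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1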
04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.664 04:23:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.563 04:23:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.563 00:22:50.563 real 0m42.198s 00:22:50.563 user 2m5.033s 00:22:50.563 sys 0m10.845s 00:22:50.563 04:23:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:50.563 04:23:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:50.563 ************************************ 00:22:50.563 END TEST nvmf_host_multipath_status 00:22:50.563 ************************************ 00:22:50.563 04:23:38 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:50.563 04:23:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:50.563 04:23:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:50.563 04:23:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.563 ************************************ 00:22:50.563 START TEST nvmf_discovery_remove_ifc 00:22:50.563 ************************************ 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:50.563 * Looking for test storage... 
00:22:50.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:50.563 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.564 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.822 04:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.349 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:53.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:53.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.350 04:23:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:53.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:53.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:53.350 00:22:53.350 --- 10.0.0.2 ping statistics --- 00:22:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.350 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:53.350 00:22:53.350 --- 10.0.0.1 ping statistics --- 00:22:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.350 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:53.350 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3462504 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3462504 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3462504 ']' 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.351 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.351 [2024-05-15 04:23:41.280286] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:22:53.351 [2024-05-15 04:23:41.280361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.351 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.351 [2024-05-15 04:23:41.355554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.609 [2024-05-15 04:23:41.461423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.609 [2024-05-15 04:23:41.461484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.609 [2024-05-15 04:23:41.461508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.609 [2024-05-15 04:23:41.461518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.609 [2024-05-15 04:23:41.461528] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.609 [2024-05-15 04:23:41.461552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.609 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.609 [2024-05-15 04:23:41.606461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.609 [2024-05-15 04:23:41.614414] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:53.609 [2024-05-15 04:23:41.614686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:53.867 null0 00:22:53.867 [2024-05-15 04:23:41.646601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3462532 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3462532 /tmp/host.sock 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3462532 ']' 00:22:53.867 04:23:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:53.867 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.867 04:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.867 [2024-05-15 04:23:41.709434] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:22:53.867 [2024-05-15 04:23:41.709499] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462532 ] 00:22:53.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.867 [2024-05-15 04:23:41.780481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.124 [2024-05-15 04:23:41.898316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.690 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.947 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.947 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:54.947 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.947 04:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.878 [2024-05-15 04:23:43.808166] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:55.878 [2024-05-15 04:23:43.808211] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:55.878 [2024-05-15 
04:23:43.808253] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:56.136 [2024-05-15 04:23:43.894538] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:56.136 [2024-05-15 04:23:43.997712] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:56.136 [2024-05-15 04:23:43.997778] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:56.136 [2024-05-15 04:23:43.997822] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:56.136 [2024-05-15 04:23:43.997848] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:56.136 [2024-05-15 04:23:43.997886] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:56.136 04:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.136 04:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:56.136 04:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.136 [2024-05-15 04:23:44.004953] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd0b0d0 was disconnected and freed. delete nvme_qpair. 
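The get_bdev_list/wait_for_bdev steps interleaved with the attach messages above poll the host application over its RPC socket until the bdev list matches what the test expects. A minimal sketch of that loop, assuming the same /tmp/host.sock socket and the rpc_cmd wrapper the suite already uses:
get_bdev_list() {
    # report the current bdev names as one sorted, space-separated string
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local expected="$1"
    # re-check once per second until the discovered list equals the expected value
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
Here the expected value is nvme0n1, so the check passes as soon as discovery has attached the controller and created the bdev.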
00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:56.136 04:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:57.508 04:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:58.441 04:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:59.373 04:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.338 04:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
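These repeated bdev_get_bdevs calls are the wait_for_bdev '' loop still reporting nvme0n1: at the @75/@76 steps above the test removed the listening address and downed the link inside the target namespace, and it now waits for the host-side discovery code to drop the bdev. In sketch form, using the same commands shown in the log:
# simulate losing the target's listening interface
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# then poll until no bdev remains (empty expected list)
wait_for_bdev ''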
00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.711 04:23:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.711 [2024-05-15 04:23:49.439102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:01.711 [2024-05-15 04:23:49.439169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.711 [2024-05-15 04:23:49.439190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.711 [2024-05-15 04:23:49.439210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.711 [2024-05-15 04:23:49.439238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.711 [2024-05-15 04:23:49.439255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.711 [2024-05-15 04:23:49.439270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.711 [2024-05-15 04:23:49.439285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.711 [2024-05-15 04:23:49.439300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.711 [2024-05-15 04:23:49.439316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.711 [2024-05-15 04:23:49.439330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.711 [2024-05-15 04:23:49.439346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2440 is same with the state(5) to be set 00:23:01.711 [2024-05-15 04:23:49.449120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd2440 (9): Bad file descriptor 00:23:01.711 [2024-05-15 04:23:49.459168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.644 04:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.644 [2024-05-15 04:23:50.477002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:03.577 [2024-05-15 
04:23:51.501010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:03.577 [2024-05-15 04:23:51.501099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd2440 with addr=10.0.0.2, port=4420 00:23:03.577 [2024-05-15 04:23:51.501129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2440 is same with the state(5) to be set 00:23:03.577 [2024-05-15 04:23:51.501616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd2440 (9): Bad file descriptor 00:23:03.577 [2024-05-15 04:23:51.501664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.577 [2024-05-15 04:23:51.501703] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:03.577 [2024-05-15 04:23:51.501746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.577 [2024-05-15 04:23:51.501772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.577 [2024-05-15 04:23:51.501794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.577 [2024-05-15 04:23:51.501809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.577 [2024-05-15 04:23:51.501824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.577 [2024-05-15 04:23:51.501839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.577 [2024-05-15 04:23:51.501854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.577 [2024-05-15 04:23:51.501868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.577 [2024-05-15 04:23:51.501883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.577 [2024-05-15 04:23:51.501898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.577 [2024-05-15 04:23:51.501913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
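The connect() failures with errno 110 (Connection timed out), the failed controller reset, and the "in failed state" messages above are the expected consequence of removing 10.0.0.2 from cvl_0_0 and taking the link down: the host-side bdev_nvme reconnect attempts can no longer reach the target, so the discovery entry for nqn.2016-06.io.spdk:cnode0 is dropped. The Discovery[10.0.0.2:8009] entries presumably come from a discovery service started earlier in the script; a plausible form of that call is sketched below, but the exact RPC invocation and flags used by the test are not shown in this excerpt and should be treated as an assumption:

    # Hypothetical reconstruction; the bdev name prefix and flag set are assumptions.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4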
00:23:03.577 [2024-05-15 04:23:51.502166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd18d0 (9): Bad file descriptor 00:23:03.577 [2024-05-15 04:23:51.503188] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:03.577 [2024-05-15 04:23:51.503210] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:03.577 04:23:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.577 04:23:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:03.577 04:23:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.510 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.510 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.510 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.511 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.511 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.511 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.511 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:04.769 04:23:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:05.700 [2024-05-15 04:23:53.563085] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:05.700 [2024-05-15 04:23:53.563121] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:05.700 [2024-05-15 04:23:53.563143] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:05.700 04:23:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:05.700 [2024-05-15 04:23:53.689618] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:05.957 [2024-05-15 04:23:53.872220] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:05.957 [2024-05-15 04:23:53.872266] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:05.957 [2024-05-15 04:23:53.872319] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:05.957 [2024-05-15 04:23:53.872343] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:05.957 [2024-05-15 04:23:53.872358] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:05.957 [2024-05-15 04:23:53.881123] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd156e0 was disconnected and freed. delete nvme_qpair. 
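At this point the test has re-added the address and brought the interface back up (the @82/@83 steps above), the discovery poller has re-attached the subsystem, and a fresh nvme1 controller is created. Condensed from the ip commands visible in the trace (and reusing the wait_for_bdev helper sketched earlier), the down/up cycle being exercised is essentially:

    # Condensed from the commands shown in the trace; paths and names taken verbatim from it.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''            # the old nvme0n1 path must disappear first
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1       # discovery re-attaches and creates a new bdev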
00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3462532 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3462532 ']' 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3462532 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3462532 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3462532' 00:23:06.889 killing process with pid 3462532 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3462532 00:23:06.889 04:23:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3462532 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.147 rmmod nvme_tcp 00:23:07.147 rmmod nvme_fabrics 00:23:07.147 rmmod nvme_keyring 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
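The killprocess calls here and just below stop the two SPDK applications the test started (presumably the host-side app, pid 3462532, and the nvmf target, pid 3462504). Reconstructed from the xtrace, the helper in common/autotest_common.sh behaves roughly as in this sketch; the real function also special-cases processes launched via sudo and non-Linux hosts, which this run does not hit:

    # Sketch of killprocess as suggested by the trace above; details may differ.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # mirrors the '[' -z "$pid" ']' guard
        kill -0 "$pid" 2>/dev/null || return 0    # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in this run
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }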
00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3462504 ']' 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3462504 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3462504 ']' 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3462504 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3462504 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3462504' 00:23:07.147 killing process with pid 3462504 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3462504 00:23:07.147 [2024-05-15 04:23:55.095640] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:07.147 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3462504 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.405 04:23:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.940 04:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.940 00:23:09.940 real 0m18.900s 00:23:09.940 user 0m26.317s 00:23:09.940 sys 0m3.369s 00:23:09.940 04:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:09.940 04:23:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.940 ************************************ 00:23:09.940 END TEST nvmf_discovery_remove_ifc 00:23:09.940 ************************************ 00:23:09.940 04:23:57 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:09.940 04:23:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:09.940 04:23:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:09.940 04:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
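run_test (from autotest_common.sh) wraps each test script with the timing summary and the START/END banners seen in this log. Based on the command line captured above, the next test can presumably also be launched on its own, provided the same autorun-spdk.conf environment is in place:

    # Standalone invocation inferred from the run_test arguments above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp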
00:23:09.940 ************************************ 00:23:09.940 START TEST nvmf_identify_kernel_target 00:23:09.940 ************************************ 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:09.940 * Looking for test storage... 00:23:09.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:09.940 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:09.941 04:23:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.941 04:23:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.470 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.470 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:12.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:12.471 04:23:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.471 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.471 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.471 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:23:12.472 00:23:12.472 --- 10.0.0.2 ping statistics --- 00:23:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.472 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:23:12.472 00:23:12.472 --- 10.0.0.1 ping statistics --- 00:23:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.472 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:12.472 04:24:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:12.472 04:24:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:13.406 Waiting for block devices as requested 00:23:13.406 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:13.406 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:13.406 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.406 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.664 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.664 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.664 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.664 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:13.922 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:13.922 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:13.922 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:14.179 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:14.179 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:14.179 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:14.179 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:14.436 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:14.436 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:14.436 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:14.437 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:14.437 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:14.437 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:14.695 No valid GPT data, bailing 00:23:14.695 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:14.696 00:23:14.696 Discovery Log Number of Records 2, Generation counter 2 00:23:14.696 =====Discovery Log Entry 0====== 00:23:14.696 trtype: tcp 00:23:14.696 adrfam: ipv4 00:23:14.696 subtype: current discovery subsystem 00:23:14.696 treq: not specified, sq flow control disable supported 00:23:14.696 portid: 1 00:23:14.696 trsvcid: 4420 00:23:14.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:14.696 traddr: 10.0.0.1 00:23:14.696 eflags: none 00:23:14.696 sectype: none 00:23:14.696 =====Discovery Log Entry 1====== 00:23:14.696 trtype: tcp 00:23:14.696 adrfam: ipv4 00:23:14.696 subtype: nvme subsystem 00:23:14.696 treq: not specified, sq flow control disable supported 00:23:14.696 portid: 1 00:23:14.696 trsvcid: 4420 00:23:14.696 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:14.696 traddr: 10.0.0.1 00:23:14.696 eflags: none 00:23:14.696 sectype: none 00:23:14.696 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:14.696 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:14.696 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.696 ===================================================== 00:23:14.696 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:14.696 ===================================================== 00:23:14.696 Controller Capabilities/Features 00:23:14.696 ================================ 00:23:14.696 Vendor ID: 0000 00:23:14.696 Subsystem Vendor ID: 0000 00:23:14.696 Serial Number: d226d06c6176ee700f41 00:23:14.696 Model Number: Linux 00:23:14.696 Firmware Version: 6.7.0-68 00:23:14.696 Recommended Arb Burst: 0 00:23:14.696 IEEE OUI Identifier: 00 00 00 00:23:14.696 Multi-path I/O 00:23:14.696 May have multiple subsystem ports: No 00:23:14.696 May have multiple 
controllers: No 00:23:14.696 Associated with SR-IOV VF: No 00:23:14.696 Max Data Transfer Size: Unlimited 00:23:14.696 Max Number of Namespaces: 0 00:23:14.696 Max Number of I/O Queues: 1024 00:23:14.696 NVMe Specification Version (VS): 1.3 00:23:14.696 NVMe Specification Version (Identify): 1.3 00:23:14.696 Maximum Queue Entries: 1024 00:23:14.696 Contiguous Queues Required: No 00:23:14.696 Arbitration Mechanisms Supported 00:23:14.696 Weighted Round Robin: Not Supported 00:23:14.696 Vendor Specific: Not Supported 00:23:14.696 Reset Timeout: 7500 ms 00:23:14.696 Doorbell Stride: 4 bytes 00:23:14.696 NVM Subsystem Reset: Not Supported 00:23:14.696 Command Sets Supported 00:23:14.696 NVM Command Set: Supported 00:23:14.696 Boot Partition: Not Supported 00:23:14.696 Memory Page Size Minimum: 4096 bytes 00:23:14.696 Memory Page Size Maximum: 4096 bytes 00:23:14.696 Persistent Memory Region: Not Supported 00:23:14.696 Optional Asynchronous Events Supported 00:23:14.696 Namespace Attribute Notices: Not Supported 00:23:14.696 Firmware Activation Notices: Not Supported 00:23:14.696 ANA Change Notices: Not Supported 00:23:14.696 PLE Aggregate Log Change Notices: Not Supported 00:23:14.696 LBA Status Info Alert Notices: Not Supported 00:23:14.696 EGE Aggregate Log Change Notices: Not Supported 00:23:14.696 Normal NVM Subsystem Shutdown event: Not Supported 00:23:14.696 Zone Descriptor Change Notices: Not Supported 00:23:14.696 Discovery Log Change Notices: Supported 00:23:14.696 Controller Attributes 00:23:14.696 128-bit Host Identifier: Not Supported 00:23:14.696 Non-Operational Permissive Mode: Not Supported 00:23:14.696 NVM Sets: Not Supported 00:23:14.696 Read Recovery Levels: Not Supported 00:23:14.696 Endurance Groups: Not Supported 00:23:14.696 Predictable Latency Mode: Not Supported 00:23:14.696 Traffic Based Keep ALive: Not Supported 00:23:14.696 Namespace Granularity: Not Supported 00:23:14.696 SQ Associations: Not Supported 00:23:14.696 UUID List: Not Supported 00:23:14.696 Multi-Domain Subsystem: Not Supported 00:23:14.696 Fixed Capacity Management: Not Supported 00:23:14.696 Variable Capacity Management: Not Supported 00:23:14.696 Delete Endurance Group: Not Supported 00:23:14.696 Delete NVM Set: Not Supported 00:23:14.696 Extended LBA Formats Supported: Not Supported 00:23:14.696 Flexible Data Placement Supported: Not Supported 00:23:14.696 00:23:14.696 Controller Memory Buffer Support 00:23:14.696 ================================ 00:23:14.696 Supported: No 00:23:14.696 00:23:14.696 Persistent Memory Region Support 00:23:14.696 ================================ 00:23:14.696 Supported: No 00:23:14.696 00:23:14.696 Admin Command Set Attributes 00:23:14.696 ============================ 00:23:14.696 Security Send/Receive: Not Supported 00:23:14.696 Format NVM: Not Supported 00:23:14.696 Firmware Activate/Download: Not Supported 00:23:14.696 Namespace Management: Not Supported 00:23:14.696 Device Self-Test: Not Supported 00:23:14.696 Directives: Not Supported 00:23:14.696 NVMe-MI: Not Supported 00:23:14.696 Virtualization Management: Not Supported 00:23:14.696 Doorbell Buffer Config: Not Supported 00:23:14.696 Get LBA Status Capability: Not Supported 00:23:14.696 Command & Feature Lockdown Capability: Not Supported 00:23:14.696 Abort Command Limit: 1 00:23:14.696 Async Event Request Limit: 1 00:23:14.696 Number of Firmware Slots: N/A 00:23:14.696 Firmware Slot 1 Read-Only: N/A 00:23:14.696 Firmware Activation Without Reset: N/A 00:23:14.696 Multiple Update Detection Support: N/A 
00:23:14.696 Firmware Update Granularity: No Information Provided 00:23:14.696 Per-Namespace SMART Log: No 00:23:14.696 Asymmetric Namespace Access Log Page: Not Supported 00:23:14.696 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:14.696 Command Effects Log Page: Not Supported 00:23:14.696 Get Log Page Extended Data: Supported 00:23:14.696 Telemetry Log Pages: Not Supported 00:23:14.696 Persistent Event Log Pages: Not Supported 00:23:14.696 Supported Log Pages Log Page: May Support 00:23:14.696 Commands Supported & Effects Log Page: Not Supported 00:23:14.696 Feature Identifiers & Effects Log Page:May Support 00:23:14.696 NVMe-MI Commands & Effects Log Page: May Support 00:23:14.696 Data Area 4 for Telemetry Log: Not Supported 00:23:14.696 Error Log Page Entries Supported: 1 00:23:14.696 Keep Alive: Not Supported 00:23:14.696 00:23:14.696 NVM Command Set Attributes 00:23:14.696 ========================== 00:23:14.696 Submission Queue Entry Size 00:23:14.696 Max: 1 00:23:14.696 Min: 1 00:23:14.696 Completion Queue Entry Size 00:23:14.696 Max: 1 00:23:14.696 Min: 1 00:23:14.696 Number of Namespaces: 0 00:23:14.696 Compare Command: Not Supported 00:23:14.696 Write Uncorrectable Command: Not Supported 00:23:14.696 Dataset Management Command: Not Supported 00:23:14.696 Write Zeroes Command: Not Supported 00:23:14.696 Set Features Save Field: Not Supported 00:23:14.696 Reservations: Not Supported 00:23:14.696 Timestamp: Not Supported 00:23:14.696 Copy: Not Supported 00:23:14.696 Volatile Write Cache: Not Present 00:23:14.696 Atomic Write Unit (Normal): 1 00:23:14.696 Atomic Write Unit (PFail): 1 00:23:14.696 Atomic Compare & Write Unit: 1 00:23:14.696 Fused Compare & Write: Not Supported 00:23:14.696 Scatter-Gather List 00:23:14.696 SGL Command Set: Supported 00:23:14.696 SGL Keyed: Not Supported 00:23:14.697 SGL Bit Bucket Descriptor: Not Supported 00:23:14.697 SGL Metadata Pointer: Not Supported 00:23:14.697 Oversized SGL: Not Supported 00:23:14.697 SGL Metadata Address: Not Supported 00:23:14.697 SGL Offset: Supported 00:23:14.697 Transport SGL Data Block: Not Supported 00:23:14.697 Replay Protected Memory Block: Not Supported 00:23:14.697 00:23:14.697 Firmware Slot Information 00:23:14.697 ========================= 00:23:14.697 Active slot: 0 00:23:14.697 00:23:14.697 00:23:14.697 Error Log 00:23:14.697 ========= 00:23:14.697 00:23:14.697 Active Namespaces 00:23:14.697 ================= 00:23:14.697 Discovery Log Page 00:23:14.697 ================== 00:23:14.697 Generation Counter: 2 00:23:14.697 Number of Records: 2 00:23:14.697 Record Format: 0 00:23:14.697 00:23:14.697 Discovery Log Entry 0 00:23:14.697 ---------------------- 00:23:14.697 Transport Type: 3 (TCP) 00:23:14.697 Address Family: 1 (IPv4) 00:23:14.697 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:14.697 Entry Flags: 00:23:14.697 Duplicate Returned Information: 0 00:23:14.697 Explicit Persistent Connection Support for Discovery: 0 00:23:14.697 Transport Requirements: 00:23:14.697 Secure Channel: Not Specified 00:23:14.697 Port ID: 1 (0x0001) 00:23:14.697 Controller ID: 65535 (0xffff) 00:23:14.697 Admin Max SQ Size: 32 00:23:14.697 Transport Service Identifier: 4420 00:23:14.697 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:14.697 Transport Address: 10.0.0.1 00:23:14.697 Discovery Log Entry 1 00:23:14.697 ---------------------- 00:23:14.697 Transport Type: 3 (TCP) 00:23:14.697 Address Family: 1 (IPv4) 00:23:14.697 Subsystem Type: 2 (NVM Subsystem) 00:23:14.697 Entry Flags: 
00:23:14.697 Duplicate Returned Information: 0 00:23:14.697 Explicit Persistent Connection Support for Discovery: 0 00:23:14.697 Transport Requirements: 00:23:14.697 Secure Channel: Not Specified 00:23:14.697 Port ID: 1 (0x0001) 00:23:14.697 Controller ID: 65535 (0xffff) 00:23:14.697 Admin Max SQ Size: 32 00:23:14.697 Transport Service Identifier: 4420 00:23:14.697 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:14.697 Transport Address: 10.0.0.1 00:23:14.697 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:14.697 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.697 get_feature(0x01) failed 00:23:14.697 get_feature(0x02) failed 00:23:14.697 get_feature(0x04) failed 00:23:14.697 ===================================================== 00:23:14.697 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:14.697 ===================================================== 00:23:14.697 Controller Capabilities/Features 00:23:14.697 ================================ 00:23:14.697 Vendor ID: 0000 00:23:14.697 Subsystem Vendor ID: 0000 00:23:14.697 Serial Number: 1493056dfd571cfe3761 00:23:14.697 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:14.697 Firmware Version: 6.7.0-68 00:23:14.697 Recommended Arb Burst: 6 00:23:14.697 IEEE OUI Identifier: 00 00 00 00:23:14.697 Multi-path I/O 00:23:14.697 May have multiple subsystem ports: Yes 00:23:14.697 May have multiple controllers: Yes 00:23:14.697 Associated with SR-IOV VF: No 00:23:14.697 Max Data Transfer Size: Unlimited 00:23:14.697 Max Number of Namespaces: 1024 00:23:14.697 Max Number of I/O Queues: 128 00:23:14.697 NVMe Specification Version (VS): 1.3 00:23:14.697 NVMe Specification Version (Identify): 1.3 00:23:14.697 Maximum Queue Entries: 1024 00:23:14.697 Contiguous Queues Required: No 00:23:14.697 Arbitration Mechanisms Supported 00:23:14.697 Weighted Round Robin: Not Supported 00:23:14.697 Vendor Specific: Not Supported 00:23:14.697 Reset Timeout: 7500 ms 00:23:14.697 Doorbell Stride: 4 bytes 00:23:14.697 NVM Subsystem Reset: Not Supported 00:23:14.697 Command Sets Supported 00:23:14.697 NVM Command Set: Supported 00:23:14.697 Boot Partition: Not Supported 00:23:14.697 Memory Page Size Minimum: 4096 bytes 00:23:14.697 Memory Page Size Maximum: 4096 bytes 00:23:14.697 Persistent Memory Region: Not Supported 00:23:14.697 Optional Asynchronous Events Supported 00:23:14.697 Namespace Attribute Notices: Supported 00:23:14.697 Firmware Activation Notices: Not Supported 00:23:14.697 ANA Change Notices: Supported 00:23:14.697 PLE Aggregate Log Change Notices: Not Supported 00:23:14.697 LBA Status Info Alert Notices: Not Supported 00:23:14.697 EGE Aggregate Log Change Notices: Not Supported 00:23:14.697 Normal NVM Subsystem Shutdown event: Not Supported 00:23:14.697 Zone Descriptor Change Notices: Not Supported 00:23:14.697 Discovery Log Change Notices: Not Supported 00:23:14.697 Controller Attributes 00:23:14.697 128-bit Host Identifier: Supported 00:23:14.697 Non-Operational Permissive Mode: Not Supported 00:23:14.697 NVM Sets: Not Supported 00:23:14.697 Read Recovery Levels: Not Supported 00:23:14.697 Endurance Groups: Not Supported 00:23:14.697 Predictable Latency Mode: Not Supported 00:23:14.697 Traffic Based Keep ALive: Supported 00:23:14.697 Namespace Granularity: Not Supported 
00:23:14.697 SQ Associations: Not Supported 00:23:14.697 UUID List: Not Supported 00:23:14.697 Multi-Domain Subsystem: Not Supported 00:23:14.697 Fixed Capacity Management: Not Supported 00:23:14.697 Variable Capacity Management: Not Supported 00:23:14.697 Delete Endurance Group: Not Supported 00:23:14.697 Delete NVM Set: Not Supported 00:23:14.697 Extended LBA Formats Supported: Not Supported 00:23:14.697 Flexible Data Placement Supported: Not Supported 00:23:14.697 00:23:14.697 Controller Memory Buffer Support 00:23:14.697 ================================ 00:23:14.697 Supported: No 00:23:14.697 00:23:14.697 Persistent Memory Region Support 00:23:14.697 ================================ 00:23:14.697 Supported: No 00:23:14.697 00:23:14.697 Admin Command Set Attributes 00:23:14.697 ============================ 00:23:14.697 Security Send/Receive: Not Supported 00:23:14.697 Format NVM: Not Supported 00:23:14.697 Firmware Activate/Download: Not Supported 00:23:14.697 Namespace Management: Not Supported 00:23:14.697 Device Self-Test: Not Supported 00:23:14.697 Directives: Not Supported 00:23:14.697 NVMe-MI: Not Supported 00:23:14.697 Virtualization Management: Not Supported 00:23:14.697 Doorbell Buffer Config: Not Supported 00:23:14.697 Get LBA Status Capability: Not Supported 00:23:14.697 Command & Feature Lockdown Capability: Not Supported 00:23:14.697 Abort Command Limit: 4 00:23:14.697 Async Event Request Limit: 4 00:23:14.697 Number of Firmware Slots: N/A 00:23:14.697 Firmware Slot 1 Read-Only: N/A 00:23:14.697 Firmware Activation Without Reset: N/A 00:23:14.697 Multiple Update Detection Support: N/A 00:23:14.697 Firmware Update Granularity: No Information Provided 00:23:14.697 Per-Namespace SMART Log: Yes 00:23:14.697 Asymmetric Namespace Access Log Page: Supported 00:23:14.697 ANA Transition Time : 10 sec 00:23:14.697 00:23:14.697 Asymmetric Namespace Access Capabilities 00:23:14.697 ANA Optimized State : Supported 00:23:14.697 ANA Non-Optimized State : Supported 00:23:14.697 ANA Inaccessible State : Supported 00:23:14.697 ANA Persistent Loss State : Supported 00:23:14.697 ANA Change State : Supported 00:23:14.697 ANAGRPID is not changed : No 00:23:14.697 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:14.697 00:23:14.697 ANA Group Identifier Maximum : 128 00:23:14.697 Number of ANA Group Identifiers : 128 00:23:14.697 Max Number of Allowed Namespaces : 1024 00:23:14.697 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:14.697 Command Effects Log Page: Supported 00:23:14.697 Get Log Page Extended Data: Supported 00:23:14.697 Telemetry Log Pages: Not Supported 00:23:14.697 Persistent Event Log Pages: Not Supported 00:23:14.697 Supported Log Pages Log Page: May Support 00:23:14.697 Commands Supported & Effects Log Page: Not Supported 00:23:14.697 Feature Identifiers & Effects Log Page:May Support 00:23:14.698 NVMe-MI Commands & Effects Log Page: May Support 00:23:14.698 Data Area 4 for Telemetry Log: Not Supported 00:23:14.698 Error Log Page Entries Supported: 128 00:23:14.698 Keep Alive: Supported 00:23:14.698 Keep Alive Granularity: 1000 ms 00:23:14.698 00:23:14.698 NVM Command Set Attributes 00:23:14.698 ========================== 00:23:14.698 Submission Queue Entry Size 00:23:14.698 Max: 64 00:23:14.698 Min: 64 00:23:14.698 Completion Queue Entry Size 00:23:14.698 Max: 16 00:23:14.698 Min: 16 00:23:14.698 Number of Namespaces: 1024 00:23:14.698 Compare Command: Not Supported 00:23:14.698 Write Uncorrectable Command: Not Supported 00:23:14.698 Dataset Management Command: Supported 
00:23:14.698 Write Zeroes Command: Supported 00:23:14.698 Set Features Save Field: Not Supported 00:23:14.698 Reservations: Not Supported 00:23:14.698 Timestamp: Not Supported 00:23:14.698 Copy: Not Supported 00:23:14.698 Volatile Write Cache: Present 00:23:14.698 Atomic Write Unit (Normal): 1 00:23:14.698 Atomic Write Unit (PFail): 1 00:23:14.698 Atomic Compare & Write Unit: 1 00:23:14.698 Fused Compare & Write: Not Supported 00:23:14.698 Scatter-Gather List 00:23:14.698 SGL Command Set: Supported 00:23:14.698 SGL Keyed: Not Supported 00:23:14.698 SGL Bit Bucket Descriptor: Not Supported 00:23:14.698 SGL Metadata Pointer: Not Supported 00:23:14.698 Oversized SGL: Not Supported 00:23:14.698 SGL Metadata Address: Not Supported 00:23:14.698 SGL Offset: Supported 00:23:14.698 Transport SGL Data Block: Not Supported 00:23:14.698 Replay Protected Memory Block: Not Supported 00:23:14.698 00:23:14.698 Firmware Slot Information 00:23:14.698 ========================= 00:23:14.698 Active slot: 0 00:23:14.698 00:23:14.698 Asymmetric Namespace Access 00:23:14.698 =========================== 00:23:14.698 Change Count : 0 00:23:14.698 Number of ANA Group Descriptors : 1 00:23:14.698 ANA Group Descriptor : 0 00:23:14.698 ANA Group ID : 1 00:23:14.698 Number of NSID Values : 1 00:23:14.698 Change Count : 0 00:23:14.698 ANA State : 1 00:23:14.698 Namespace Identifier : 1 00:23:14.698 00:23:14.698 Commands Supported and Effects 00:23:14.698 ============================== 00:23:14.698 Admin Commands 00:23:14.698 -------------- 00:23:14.698 Get Log Page (02h): Supported 00:23:14.698 Identify (06h): Supported 00:23:14.698 Abort (08h): Supported 00:23:14.698 Set Features (09h): Supported 00:23:14.698 Get Features (0Ah): Supported 00:23:14.698 Asynchronous Event Request (0Ch): Supported 00:23:14.698 Keep Alive (18h): Supported 00:23:14.698 I/O Commands 00:23:14.698 ------------ 00:23:14.698 Flush (00h): Supported 00:23:14.698 Write (01h): Supported LBA-Change 00:23:14.698 Read (02h): Supported 00:23:14.698 Write Zeroes (08h): Supported LBA-Change 00:23:14.698 Dataset Management (09h): Supported 00:23:14.698 00:23:14.698 Error Log 00:23:14.698 ========= 00:23:14.698 Entry: 0 00:23:14.698 Error Count: 0x3 00:23:14.698 Submission Queue Id: 0x0 00:23:14.698 Command Id: 0x5 00:23:14.698 Phase Bit: 0 00:23:14.698 Status Code: 0x2 00:23:14.698 Status Code Type: 0x0 00:23:14.698 Do Not Retry: 1 00:23:14.698 Error Location: 0x28 00:23:14.698 LBA: 0x0 00:23:14.698 Namespace: 0x0 00:23:14.698 Vendor Log Page: 0x0 00:23:14.698 ----------- 00:23:14.698 Entry: 1 00:23:14.698 Error Count: 0x2 00:23:14.698 Submission Queue Id: 0x0 00:23:14.698 Command Id: 0x5 00:23:14.698 Phase Bit: 0 00:23:14.698 Status Code: 0x2 00:23:14.698 Status Code Type: 0x0 00:23:14.698 Do Not Retry: 1 00:23:14.698 Error Location: 0x28 00:23:14.698 LBA: 0x0 00:23:14.698 Namespace: 0x0 00:23:14.698 Vendor Log Page: 0x0 00:23:14.698 ----------- 00:23:14.698 Entry: 2 00:23:14.698 Error Count: 0x1 00:23:14.698 Submission Queue Id: 0x0 00:23:14.698 Command Id: 0x4 00:23:14.698 Phase Bit: 0 00:23:14.698 Status Code: 0x2 00:23:14.698 Status Code Type: 0x0 00:23:14.698 Do Not Retry: 1 00:23:14.698 Error Location: 0x28 00:23:14.698 LBA: 0x0 00:23:14.698 Namespace: 0x0 00:23:14.698 Vendor Log Page: 0x0 00:23:14.698 00:23:14.698 Number of Queues 00:23:14.698 ================ 00:23:14.698 Number of I/O Submission Queues: 128 00:23:14.698 Number of I/O Completion Queues: 128 00:23:14.698 00:23:14.698 ZNS Specific Controller Data 00:23:14.698 
============================ 00:23:14.698 Zone Append Size Limit: 0 00:23:14.698 00:23:14.698 00:23:14.698 Active Namespaces 00:23:14.698 ================= 00:23:14.698 get_feature(0x05) failed 00:23:14.698 Namespace ID:1 00:23:14.698 Command Set Identifier: NVM (00h) 00:23:14.698 Deallocate: Supported 00:23:14.698 Deallocated/Unwritten Error: Not Supported 00:23:14.698 Deallocated Read Value: Unknown 00:23:14.698 Deallocate in Write Zeroes: Not Supported 00:23:14.698 Deallocated Guard Field: 0xFFFF 00:23:14.698 Flush: Supported 00:23:14.698 Reservation: Not Supported 00:23:14.698 Namespace Sharing Capabilities: Multiple Controllers 00:23:14.698 Size (in LBAs): 1953525168 (931GiB) 00:23:14.698 Capacity (in LBAs): 1953525168 (931GiB) 00:23:14.698 Utilization (in LBAs): 1953525168 (931GiB) 00:23:14.698 UUID: 664c1bef-f1dd-4cff-a7ee-7624e53933ad 00:23:14.698 Thin Provisioning: Not Supported 00:23:14.698 Per-NS Atomic Units: Yes 00:23:14.698 Atomic Boundary Size (Normal): 0 00:23:14.698 Atomic Boundary Size (PFail): 0 00:23:14.698 Atomic Boundary Offset: 0 00:23:14.698 NGUID/EUI64 Never Reused: No 00:23:14.698 ANA group ID: 1 00:23:14.698 Namespace Write Protected: No 00:23:14.698 Number of LBA Formats: 1 00:23:14.698 Current LBA Format: LBA Format #00 00:23:14.698 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:14.698 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.698 rmmod nvme_tcp 00:23:14.698 rmmod nvme_fabrics 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.698 04:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.287 
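The nvmftestfini trace above is the initiator-side cleanup for this test: unload nvme-tcp and nvme-fabrics inside a set +e retry loop (module removal can be refused while a controller is still being torn down), remove the SPDK network namespace, and flush the initiator address. A condensed sketch of the same sequence; the body of _remove_spdk_ns is hidden by xtrace_disable_per_cmd, so the netns delete line is an assumption:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics && break   # retried because removal may fail while a controller is going away (assumption)
    done
    set -e
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed expansion of _remove_spdk_ns (not visible in the trace)
    ip -4 addr flush cvl_0_1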
04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:17.287 04:24:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:18.320 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:18.320 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:18.320 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:19.256 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:19.513 00:23:19.513 real 0m9.831s 00:23:19.513 user 0m2.117s 00:23:19.513 sys 0m3.779s 00:23:19.513 04:24:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:19.513 04:24:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.513 ************************************ 00:23:19.513 END TEST nvmf_identify_kernel_target 00:23:19.513 ************************************ 00:23:19.513 04:24:07 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:19.513 04:24:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:19.513 04:24:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:19.513 04:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.513 ************************************ 00:23:19.513 START TEST nvmf_auth_host 00:23:19.513 ************************************ 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
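clean_kernel_target, traced above, tears down the configfs-based kernel NVMe-oF target that served nqn.2016-06.io.spdk:testnqn. The redirect target of the bare "echo 0" is not visible in the trace, so disabling the namespace through its enable attribute is an assumption; the remaining paths are taken verbatim from the trace:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"      # assumed target of the bare 'echo 0' in the trace
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # drop the port -> subsystem link
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                 # only succeeds once nothing else holds the modules

setup.sh then rebinds the ioatdma channels and the NVMe device back to vfio-pci, which is what the 0000:00:04.x / 0000:88:00.0 lines above record.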
00:23:19.513 * Looking for test storage... 00:23:19.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.513 04:24:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.514 04:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.041 
04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:22.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:22.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:22.041 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.041 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:22.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:23:22.042 00:23:22.042 --- 10.0.0.2 ping statistics --- 00:23:22.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.042 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:22.042 00:23:22.042 --- 10.0.0.1 ping statistics --- 00:23:22.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.042 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3471135 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3471135 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3471135 ']' 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
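The nvmf_tcp_init and nvmfappstart steps traced above split the two e810 ports between the root namespace (initiator, cvl_0_1, 10.0.0.1) and a dedicated namespace (target, cvl_0_0, 10.0.0.2), verify connectivity in both directions, and launch nvmf_tgt inside that namespace. A condensed sketch of the same sequence; the backgrounding and the pid/socket polling done by nvmfappstart and waitforlisten are implied rather than shown literally:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity check both ways
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &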
00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.042 04:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=041f9f17f5823abd270be79765c33e99 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tye 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 041f9f17f5823abd270be79765c33e99 0 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 041f9f17f5823abd270be79765c33e99 0 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=041f9f17f5823abd270be79765c33e99 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:22.300 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tye 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tye 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tye 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:22.558 
04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=343c55f9c566c0df47881cb035289a208ba6768472c44ba3385d2ba4ad23b73c 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3Lw 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 343c55f9c566c0df47881cb035289a208ba6768472c44ba3385d2ba4ad23b73c 3 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 343c55f9c566c0df47881cb035289a208ba6768472c44ba3385d2ba4ad23b73c 3 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=343c55f9c566c0df47881cb035289a208ba6768472c44ba3385d2ba4ad23b73c 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3Lw 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3Lw 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3Lw 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d00e22f010ad0ce8c31bf9033c307b0da99cd335a7b52e4 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Adi 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d00e22f010ad0ce8c31bf9033c307b0da99cd335a7b52e4 0 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d00e22f010ad0ce8c31bf9033c307b0da99cd335a7b52e4 0 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d00e22f010ad0ce8c31bf9033c307b0da99cd335a7b52e4 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Adi 00:23:22.558 04:24:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Adi 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Adi 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.558 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e78583810bb009c4312fd35c17e202bd9b0d504568dee28e 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.I1d 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e78583810bb009c4312fd35c17e202bd9b0d504568dee28e 2 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e78583810bb009c4312fd35c17e202bd9b0d504568dee28e 2 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e78583810bb009c4312fd35c17e202bd9b0d504568dee28e 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.I1d 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.I1d 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.I1d 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=452d34e8fdd236bd32c26ca407aab964 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Du 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 452d34e8fdd236bd32c26ca407aab964 1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 452d34e8fdd236bd32c26ca407aab964 1 
00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=452d34e8fdd236bd32c26ca407aab964 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Du 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Du 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6Du 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cdacd58aac1c760acb06c0208bcbf370 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5x2 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cdacd58aac1c760acb06c0208bcbf370 1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cdacd58aac1c760acb06c0208bcbf370 1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cdacd58aac1c760acb06c0208bcbf370 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:22.559 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5x2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5x2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5x2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9f7b10fbd0333a0be0edb023465e60440b9217bd999a0282 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7JW 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f7b10fbd0333a0be0edb023465e60440b9217bd999a0282 2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f7b10fbd0333a0be0edb023465e60440b9217bd999a0282 2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f7b10fbd0333a0be0edb023465e60440b9217bd999a0282 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7JW 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7JW 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7JW 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c87a03d84447086d0a15bac3842537d 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JSY 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c87a03d84447086d0a15bac3842537d 0 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c87a03d84447086d0a15bac3842537d 0 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c87a03d84447086d0a15bac3842537d 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JSY 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JSY 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JSY 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e7833cf427ec7f34f39c130a8da38f19420b7e123259a397c7461d0502656c0 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rLa 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e7833cf427ec7f34f39c130a8da38f19420b7e123259a397c7461d0502656c0 3 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e7833cf427ec7f34f39c130a8da38f19420b7e123259a397c7461d0502656c0 3 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e7833cf427ec7f34f39c130a8da38f19420b7e123259a397c7461d0502656c0 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rLa 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rLa 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rLa 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3471135 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3471135 ']' 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
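Every gen_dhchap_key call traced above follows the same pattern: draw the requested number of random bytes with xxd, create a temp file named after the digest, and pass the hex string plus a digest id (0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests table in the trace) to a small python helper that writes the wrapped DHHC-1 secret. The helper's body never appears in the trace, so only the visible shell side is reproduced here:

    digest=null   # or sha256 / sha384 / sha512
    len=32        # requested key length in hex characters
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. 041f9f17f5823abd270be79765c33e99
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_dhchap_key "$key" 0  ->  format_key DHHC-1 "$key" 0  ->  python -   (elided step; assumed to produce the secret stored in $file)
    chmod 0600 "$file"
    keys[0]=$file    # ckeys[] entries are generated the same way for the controller-side secrets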
00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.817 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tye 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3Lw ]] 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Lw 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.075 04:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.075 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.075 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.075 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Adi 00:23:23.075 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.075 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.I1d ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I1d 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6Du 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5x2 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5x2 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7JW 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JSY ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JSY 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rLa 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:23.076 04:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:24.449 Waiting for block devices as requested 00:23:24.449 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:24.449 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:24.449 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:24.706 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:24.706 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:24.706 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:24.706 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:24.964 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:24.964 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:24.964 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:24.964 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:25.221 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:25.221 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:25.221 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:25.221 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:25.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:25.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:25.736 04:24:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:25.994 No valid GPT data, bailing 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:25.994 00:23:25.994 Discovery Log Number of Records 2, Generation counter 2 00:23:25.994 =====Discovery Log Entry 0====== 00:23:25.994 trtype: tcp 00:23:25.994 adrfam: ipv4 00:23:25.994 subtype: current discovery subsystem 00:23:25.994 treq: not specified, sq flow control disable supported 00:23:25.994 portid: 1 00:23:25.994 trsvcid: 4420 00:23:25.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:25.994 traddr: 10.0.0.1 00:23:25.994 eflags: none 00:23:25.994 sectype: none 00:23:25.994 =====Discovery Log Entry 1====== 00:23:25.994 trtype: tcp 00:23:25.994 adrfam: ipv4 00:23:25.994 subtype: nvme subsystem 00:23:25.994 treq: not specified, sq flow control disable supported 00:23:25.994 portid: 1 00:23:25.994 trsvcid: 4420 00:23:25.994 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:25.994 traddr: 10.0.0.1 00:23:25.994 eflags: none 00:23:25.994 sectype: none 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.994 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 
]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 nvme0n1 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 
04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.995 04:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:26.253 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 
04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 nvme0n1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.254 04:24:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.511 nvme0n1 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.511 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.512 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 nvme0n1 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:26.770 04:24:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 nvme0n1 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.770 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.028 nvme0n1 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.028 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.029 04:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 nvme0n1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.322 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.581 nvme0n1 00:23:27.581 
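Each connect_authenticate pass in the trace reduces to a handful of RPCs against the running target; rpc_cmd resolves to scripts/rpc.py in the autotest harness (the wrapper is defined in autotest_common.sh, so treating the two as equivalent here is an assumption). A condensed sketch of the keyid=1 iteration, using only RPC names, flags and key files that appear verbatim in the trace:

  # register the host secrets with the SPDK keyring (host/auth.sh@81-82, done once up front)
  scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.Adi
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I1d
  # limit the initiator to the digest/dhgroup combination under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # attach to the kernel nvmet subsystem with DH-HMAC-CHAP (bidirectional, since ckey1 exists)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller came up, then detach before the next (digest, dhgroup, keyid) combination
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0

On the target side the matching secret is staged by nvmet_auth_set_key, which is what the echo 'hmac(sha256)', echo ffdhe3072 and echo DHHC-1:... lines in the trace correspond to.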
04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.581 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.581 nvme0n1 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.582 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.840 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.841 nvme0n1 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.841 
04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.841 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.099 04:24:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.099 04:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 nvme0n1 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:28.099 04:24:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.099 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.356 nvme0n1 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.356 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.614 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.615 04:24:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.615 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.872 nvme0n1 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.873 04:24:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.873 04:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.131 nvme0n1 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
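The pattern repeating through this part of the log (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, each swept over the available keyids) is driven by the nested loops visible in the host/auth.sh@101-@104 trace entries. A hedged loop skeleton reconstructed from those entries only; the keys/ckeys arrays and the digest list are populated earlier in the script and are not shown here.

    # loop structure implied by the host/auth.sh@101-@104 entries in this trace
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # sha256 is the digest throughout this part of the trace
            connect_authenticate sha256 "$dhgroup" "$keyid"   # set host dhchap options, attach, verify nvme0, detach
        done
    done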
00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.131 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 nvme0n1 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.389 04:24:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.389 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.390 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.648 nvme0n1 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.648 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:29.906 04:24:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.906 04:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.164 nvme0n1 00:23:30.164 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.422 
04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.422 04:24:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.422 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.994 nvme0n1 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.994 04:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.615 nvme0n1 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.615 
04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.615 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.874 nvme0n1 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.874 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.132 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.133 04:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.698 nvme0n1 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.698 04:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.632 nvme0n1 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.632 04:24:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:33.632 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.633 04:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.567 nvme0n1 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.567 04:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.501 nvme0n1 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.501 
04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.501 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
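The candidate table traced just above resolves an environment-variable name per transport and then dereferences it. A minimal reconstruction of get_main_ns_ip as this trace shows it (nvmf/common.sh@741-755); the $TEST_TRANSPORT variable name is an assumption, the log only shows its value "tcp":

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                   # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # traced as: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                   # ip holds a variable *name*
    [[ -z ${!ip} ]] && return 1                            # indirect expansion of that name
    echo "${!ip}"                                          # -> 10.0.0.1 in this run
}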
00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.502 04:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.437 nvme0n1 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.437 
04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 04:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 nvme0n1 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 nvme0n1 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
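Each keyid iteration in this log follows the same connect/verify/detach pattern (host/auth.sh@55-65). A minimal sketch of that flow, reconstructed from the traced commands; the literal NQNs and port are the values this run used, and rpc_cmd is the test suite's RPC wrapper:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=()

    # --dhchap-ctrlr-key is added only when a controller key exists for this keyid
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # restrict the initiator to a single digest/DH-group combination
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # attach with the host key under test, then confirm the controller came up
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}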
00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.629 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.629 nvme0n1 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.630 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.888 nvme0n1 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.888 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.889 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 nvme0n1 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.147 04:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 nvme0n1 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.147 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
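The echoes traced in nvmet_auth_set_key (host/auth.sh@42-51) show no destination because xtrace does not print redirections; presumably they program the Linux nvmet target's DH-HMAC-CHAP attributes. A sketch under that assumption -- the configfs paths and the $hostnqn variable are assumptions, not visible in this log:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed destination

    echo "hmac($digest)" > "$host/dhchap_hash"           # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"        # e.g. ffdhe3072
    echo "$key"          > "$host/dhchap_key"            # DHHC-1:..: host secret
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional auth only
}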
00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 nvme0n1 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:38.407 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
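The repetition throughout this log comes from the nested sweep at host/auth.sh@100-104: every digest is exercised against every DH group and every configured key index. The digests/dhgroups/keys arrays are defined earlier in the script; the values in the comments are only what this excerpt happens to show:

for digest in "${digests[@]}"; do           # sha256, sha384, ... in this run
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do      # key indexes 0..4 in this excerpt
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
done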
00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.408 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.666 nvme0n1 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.666 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.667 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.667 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.667 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 nvme0n1 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.925 04:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.183 nvme0n1 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:39.183 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.184 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 nvme0n1 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.442 04:24:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.442 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.701 nvme0n1 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.701 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.702 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.960 nvme0n1 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.960 04:24:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.960 04:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.218 nvme0n1 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.218 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:40.476 04:24:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.476 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.734 nvme0n1 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.734 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:40.735 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.993 nvme0n1 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.993 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.994 04:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.559 nvme0n1 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.559 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.560 04:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.125 nvme0n1 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.125 04:24:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.125 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 nvme0n1 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.691 04:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.258 nvme0n1 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.258 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 nvme0n1 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
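
The trace repeats the same host-side sequence for every digest, DH group, and key index. Below is a minimal standalone sketch of one such connect_authenticate iteration, assuming SPDK's scripts/rpc.py wrapper and the address, port, and NQNs shown in the trace; key3/ckey3 are key names the test registered earlier in the run and are not defined in this excerpt.

    # Sketch of one iteration, assuming scripts/rpc.py and a running SPDK target.
    rpc_py=scripts/rpc.py
    digest=sha384
    dhgroup=ffdhe6144
    keyid=3

    # Allow only the digest/DH group under test on the host side.
    $rpc_py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with DH-HMAC-CHAP; --dhchap-ctrlr-key makes it bidirectional.
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Confirm the controller authenticated, then detach before the next combination.
    $rpc_py bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc_py bdev_nvme_detach_controller nvme0
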
00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.824 04:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.782 nvme0n1 00:23:44.782 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.782 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.782 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.782 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.782 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.049 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.050 04:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.985 nvme0n1 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.985 04:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.920 nvme0n1 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.920 04:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.855 nvme0n1 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.855 04:24:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.855 04:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.789 nvme0n1 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.789 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.790 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.047 nvme0n1 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.047 04:24:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.047 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.048 04:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.306 nvme0n1 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.306 nvme0n1 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.306 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.307 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.565 04:24:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.565 04:24:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.565 nvme0n1 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:49.565 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.566 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.825 nvme0n1 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.825 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.084 nvme0n1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.084 
04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.084 04:24:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.084 04:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 nvme0n1 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
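The rpc_cmd traces above capture the host-side half of each iteration: the bdev_nvme driver is told which DH-HMAC-CHAP digest and FFDHE group to offer, then a controller is attached with the host key and, when present, the controller key. A minimal sketch of that sequence, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py and that key1/ckey1 name secrets registered earlier in the run:

  # Host-side DH-HMAC-CHAP setup for one digest/dhgroup/key combination
  # (transport, addresses and NQNs taken from the log; the rpc.py invocation is an assumption).
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1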
00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 nvme0n1 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.344 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.603 04:24:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
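On the target side, nvmet_auth_set_key (traced at host/auth.sh@42-51 above) installs the matching secrets for the kernel nvmet host entry before every connect. The echoed values line up with the usual nvmet configfs host attributes, but the paths below are an assumption since the log only shows the values being emitted:

  # Hypothetical reconstruction of the target-side key install; $key and $ckey
  # stand in for the DHHC-1 secrets printed in the trace.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"
  echo ffdhe3072      > "$host_dir/dhchap_dhgroup"
  echo "$key"         > "$host_dir/dhchap_key"        # host secret for this keyid
  echo "$ckey"        > "$host_dir/dhchap_ctrl_key"   # controller secret, when bidirectional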
00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.603 nvme0n1 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.603 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.863 
04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 nvme0n1 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.863 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.864 04:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.122 nvme0n1 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.122 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.380 04:24:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.380 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.638 nvme0n1 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
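Each successful attach is followed by the same check-and-teardown step (host/auth.sh@64-65 in the trace): list the controllers, confirm nvme0 actually came up authenticated, then detach it so the next key can be tried. Roughly, again assuming rpc.py in place of the suite's rpc_cmd wrapper:

  # Verify the controller exists, then detach before the next iteration.
  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1
  rpc.py bdev_nvme_detach_controller nvme0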
00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.638 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.895 nvme0n1 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.895 04:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.152 nvme0n1 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.152 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.409 nvme0n1 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.409 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.667 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.232 nvme0n1 00:23:53.232 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.233 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.233 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.233 04:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.233 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.233 04:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
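The pattern repeating through this section is a nested sweep: for the sha512 digest, every FFDHE group is exercised against every registered key index, with the target-side key installed first and the authenticated connect attempted second. The loop shape below is inferred from the host/auth.sh@101-104 traces; the array contents and the helper bodies are not shown here and are assumptions:

  digest=sha512
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)     # groups seen so far in this run
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do           # keys[0..4] hold the DHHC-1 secrets registered earlier
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"    # host attach/verify/detach
      done
  done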
00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.233 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.799 nvme0n1 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.799 04:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.365 nvme0n1 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.365 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.931 nvme0n1 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.931 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.932 04:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.497 nvme0n1 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.497 04:24:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQxZjlmMTdmNTgyM2FiZDI3MGJlNzk3NjVjMzNlOTlbLtvr: 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQzYzU1ZjljNTY2YzBkZjQ3ODgxY2IwMzUyODlhMjA4YmE2NzY4NDcyYzQ0YmEzMzg1ZDJiYTRhZDIzYjczY5tsFoI=: 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.497 04:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.429 nvme0n1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.429 04:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.360 nvme0n1 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.360 04:24:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDUyZDM0ZThmZGQyMzZiZDMyYzI2Y2E0MDdhYWI5NjRcj6b8: 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: ]] 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2RhY2Q1OGFhYzFjNzYwYWNiMDZjMDIwOGJjYmYzNzAGDR9a: 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.360 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.361 04:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.294 nvme0n1 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWY3YjEwZmJkMDMzM2EwYmUwZWRiMDIzNDY1ZTYwNDQwYjkyMTdiZDk5OWEwMjgyx0nGxw==: 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM4N2EwM2Q4NDQ0NzA4NmQwYTE1YmFjMzg0MjUzN2TbLa1c: 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:58.294 04:24:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.294 04:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.237 nvme0n1 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U3ODMzY2Y0MjdlYzdmMzRmMzljMTMwYThkYTM4ZjE5NDIwYjdlMTIzMjU5YTM5N2M3NDYxZDA1MDI2NTZjMHa9TWY=: 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:59.237 04:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.171 nvme0n1 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQwMGUyMmYwMTBhZDBjZThjMzFiZjkwMzNjMzA3YjBkYTk5Y2QzMzVhN2I1MmU0RvffRg==: 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: ]] 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTc4NTgzODEwYmIwMDljNDMxMmZkMzVjMTdlMjAyYmQ5YjBkNTA0NTY4ZGVlMjhlVUWPsg==: 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.172 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.430 
04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.430 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 request: 00:24:00.431 { 00:24:00.431 "name": "nvme0", 00:24:00.431 "trtype": "tcp", 00:24:00.431 "traddr": "10.0.0.1", 00:24:00.431 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "4420", 00:24:00.431 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:00.431 "method": "bdev_nvme_attach_controller", 00:24:00.431 "req_id": 1 00:24:00.431 } 00:24:00.431 Got JSON-RPC error response 00:24:00.431 response: 00:24:00.431 { 00:24:00.431 "code": -32602, 00:24:00.431 "message": "Invalid parameters" 00:24:00.431 } 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:00.431 
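The iterations traced above repeat one pattern per digest/dhgroup/keyid combination: nvmet_auth_set_key writes the hash name, DH group and DHHC-1 secret into the kernel target's configfs (the redirect targets are not visible in the xtrace output), then connect_authenticate configures the initiator and attaches. A minimal host-side sketch of one such pass, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that key2/ckey2 were registered as key names earlier in auth.sh (the cleanup later removes the generated /tmp/spdk.key-* files):

    # hedged reconstruction of one connect_authenticate iteration from the trace above
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the authenticated controller came up, then detach before the next keyid
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The negative case just traced re-keys the target for keyid 1 with sha256/ffdhe2048 and then attaches without any --dhchap-key argument; with authentication still required, the attach is rejected with the JSON-RPC -32602 "Invalid parameters" response and the controller count stays at zero.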
04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 request: 00:24:00.431 { 00:24:00.431 "name": "nvme0", 00:24:00.431 "trtype": "tcp", 00:24:00.431 "traddr": "10.0.0.1", 00:24:00.431 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "4420", 00:24:00.431 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:00.431 "dhchap_key": "key2", 00:24:00.431 "method": "bdev_nvme_attach_controller", 00:24:00.431 "req_id": 1 00:24:00.431 } 00:24:00.431 Got JSON-RPC error response 00:24:00.431 response: 00:24:00.431 { 00:24:00.431 "code": -32602, 00:24:00.431 "message": "Invalid parameters" 00:24:00.431 } 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
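Each expected-failure case runs through the NOT/valid_exec_arg helpers traced from common/autotest_common.sh: the RPC is executed, its exit status is captured in es, and the test only passes if the call actually failed. A simplified stand-in for that pattern (the shipped helper also special-cases exit codes above 128; this is a hedged condensation, not the real implementation):

    # simplified sketch of the NOT() expected-failure helper seen in the trace;
    # the real one lives in common/autotest_common.sh and tracks $es / signal exits
    NOT() {
        if "$@"; then
            return 1    # the wrapped command unexpectedly succeeded
        fi
        return 0        # failure was expected
    }
    # e.g. a host offering key2 while the target was re-keyed for keyid 1
    # (sha256/ffdhe2048) must be rejected with -32602 Invalid parameters
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

The third case below swaps in a valid host key (key1) but a mismatched controller key (ckey2), which is likewise expected to be rejected.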
00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 request: 00:24:00.431 { 00:24:00.431 "name": "nvme0", 00:24:00.431 "trtype": "tcp", 00:24:00.431 "traddr": "10.0.0.1", 00:24:00.431 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "4420", 00:24:00.431 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:00.431 "dhchap_key": "key1", 00:24:00.431 "dhchap_ctrlr_key": "ckey2", 00:24:00.431 "method": "bdev_nvme_attach_controller", 00:24:00.431 
"req_id": 1 00:24:00.431 } 00:24:00.431 Got JSON-RPC error response 00:24:00.431 response: 00:24:00.431 { 00:24:00.431 "code": -32602, 00:24:00.431 "message": "Invalid parameters" 00:24:00.431 } 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.431 rmmod nvme_tcp 00:24:00.431 rmmod nvme_fabrics 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3471135 ']' 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3471135 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3471135 ']' 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3471135 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:24:00.431 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3471135 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3471135' 00:24:00.432 killing process with pid 3471135 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3471135 00:24:00.432 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3471135 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.690 
04:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.690 04:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:03.224 04:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:04.160 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:04.160 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:04.160 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:05.095 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:05.355 04:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tye /tmp/spdk.key-null.Adi /tmp/spdk.key-sha256.6Du /tmp/spdk.key-sha384.7JW /tmp/spdk.key-sha512.rLa /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:05.355 04:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:06.729 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:06.729 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:06.729 0000:00:04.6 (8086 0e26): Already 
using the vfio-pci driver 00:24:06.729 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:06.729 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:06.729 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:06.729 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:06.729 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:06.729 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:06.729 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:06.729 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:06.729 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:06.729 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:06.729 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:06.729 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:06.729 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:06.729 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:06.729 00:24:06.729 real 0m47.342s 00:24:06.729 user 0m44.615s 00:24:06.729 sys 0m6.061s 00:24:06.729 04:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:06.729 04:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.729 ************************************ 00:24:06.729 END TEST nvmf_auth_host 00:24:06.729 ************************************ 00:24:06.729 04:24:54 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:24:06.729 04:24:54 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:06.729 04:24:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:06.729 04:24:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:06.729 04:24:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.988 ************************************ 00:24:06.988 START TEST nvmf_digest 00:24:06.988 ************************************ 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:06.988 * Looking for test storage... 
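Between the two suites the auth test tears everything down: the nvme-tcp and nvme-fabrics modules are unloaded, the long-running nvmf target (pid 3471135) is killed, remove_spdk_ns runs and the cvl_0_1 address is flushed, the kernel nvmet configfs tree is dismantled, setup.sh rebinds the ioatdma and NVMe devices to vfio-pci, and the generated /tmp/spdk.key-* files are removed. Condensed from the clean_kernel_target portion of that trace (paths as logged; the redirect target of the 'echo 0' step is not visible in the xtrace output and is assumed to be the namespace enable attribute):

    # kernel nvmet teardown as traced above (hedged condensation)
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # assumed target of 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet

With the node back in a clean state, nvmf_digest starts by sourcing test/nvmf/common.sh, generating a host NQN, and running nvmftestinit, which begins enumerating the supported NVMe-oF NICs before bringing the target up again.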
00:24:06.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.988 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.989 04:24:54 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.989 04:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.521 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.521 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.521 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.521 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:09.522 00:24:09.522 --- 10.0.0.2 ping statistics --- 00:24:09.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.522 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:24:09.522 00:24:09.522 --- 10.0.0.1 ping statistics --- 00:24:09.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.522 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:09.522 ************************************ 00:24:09.522 START TEST nvmf_digest_clean 00:24:09.522 ************************************ 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3480884 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3480884 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3480884 ']' 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.522 
04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:09.522 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.781 [2024-05-15 04:24:57.576120] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:09.781 [2024-05-15 04:24:57.576185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.781 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.781 [2024-05-15 04:24:57.654514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.781 [2024-05-15 04:24:57.774381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.781 [2024-05-15 04:24:57.774436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.781 [2024-05-15 04:24:57.774453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.781 [2024-05-15 04:24:57.774467] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.781 [2024-05-15 04:24:57.774478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
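The nvmf_tgt instance starting above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit/nvmf_tcp_init assembled a few lines earlier: one E810 port (cvl_0_0) is moved into the namespace as the target-side interface at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. A condensed sketch of that setup, using only the interface names, addresses and commands reported in the trace (the SPDK repository path is abbreviated):

    # build the two-sided NVMe/TCP test topology (as traced by nvmf_tcp_init)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # the target is then launched inside the namespace, deferring init until RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &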
00:24:09.781 [2024-05-15 04:24:57.774515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.040 null0 00:24:10.040 [2024-05-15 04:24:57.964657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.040 [2024-05-15 04:24:57.988640] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:10.040 [2024-05-15 04:24:57.988911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3481024 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3481024 /var/tmp/bperf.sock 00:24:10.040 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3481024 ']' 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.041 04:24:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.041 [2024-05-15 04:24:58.037926] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:10.041 [2024-05-15 04:24:58.038008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481024 ] 00:24:10.299 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.299 [2024-05-15 04:24:58.108552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.299 [2024-05-15 04:24:58.212962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.299 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.299 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:10.299 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:10.299 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:10.299 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.557 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.557 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.122 nvme0n1 00:24:11.122 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:11.122 04:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.122 Running I/O for 2 seconds... 
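Each run_bperf pass follows the same initiator-side RPC sequence visible in the trace: bdevperf is started against its own RPC socket with --wait-for-rpc, the framework is released, an NVMe-oF controller is attached over TCP with data digest enabled (--ddgst), and the 2-second workload is kicked off through bdevperf.py. A condensed sketch of the first (randread, 4 KiB, QD 128) pass, with the repository paths shortened:

    sock=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    ./scripts/rpc.py -s $sock framework_start_init       # scan_dsa=false, so the default accel modules are used
    ./scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # exposes bdev nvme0n1
    ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests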
00:24:13.652 00:24:13.652 Latency(us) 00:24:13.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.652 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:13.652 nvme0n1 : 2.00 18598.73 72.65 0.00 0.00 6872.74 3301.07 12233.39 00:24:13.652 =================================================================================================================== 00:24:13.652 Total : 18598.73 72.65 0.00 0.00 6872.74 3301.07 12233.39 00:24:13.652 0 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:13.652 | select(.opcode=="crc32c") 00:24:13.652 | "\(.module_name) \(.executed)"' 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3481024 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3481024 ']' 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3481024 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:13.652 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481024 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481024' 00:24:13.653 killing process with pid 3481024 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3481024 00:24:13.653 Received shutdown signal, test time was about 2.000000 seconds 00:24:13.653 00:24:13.653 Latency(us) 00:24:13.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.653 =================================================================================================================== 00:24:13.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.653 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3481024 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:13.911 04:25:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3481434 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3481434 /var/tmp/bperf.sock 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3481434 ']' 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:13.911 04:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.911 [2024-05-15 04:25:01.729660] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:13.911 [2024-05-15 04:25:01.729742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481434 ] 00:24:13.911 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.911 Zero copy mechanism will not be used. 
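Before the second pass's numbers come in, the first result row above is easy to sanity-check: 18598.73 IOPS at 4096 bytes per I/O works out to the 72.65 MiB/s the table reports. For example:

    awk 'BEGIN { printf "%.2f MiB/s\n", 18598.73 * 4096 / (1024 * 1024) }'   # prints 72.65 MiB/s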
00:24:13.911 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.912 [2024-05-15 04:25:01.805129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.912 [2024-05-15 04:25:01.926316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.844 04:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.844 04:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:14.844 04:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:14.844 04:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:14.844 04:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:15.102 04:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.102 04:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.690 nvme0n1 00:24:15.690 04:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:15.690 04:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:15.690 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:15.690 Zero copy mechanism will not be used. 00:24:15.690 Running I/O for 2 seconds... 
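When this 131072-byte run finishes, the harness repeats the verification step already traced after the first pass: it pulls the accel framework's statistics over the bperf socket, extracts the crc32c entry with jq, and asserts that the digest work was really executed by the expected module (software here, since scan_dsa=false and no DSA device is configured). A sketch of that check, built from the accel_get_stats/jq calls in the trace:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software                       # would differ if scan_dsa were true
    (( acc_executed > 0 ))                    # digests were actually computed
    [[ $acc_module == "$exp_module" ]]        # ...and by the expected accel module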
00:24:18.219 00:24:18.219 Latency(us) 00:24:18.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:18.219 nvme0n1 : 2.00 2110.99 263.87 0.00 0.00 7575.69 7427.41 14369.37 00:24:18.219 =================================================================================================================== 00:24:18.219 Total : 2110.99 263.87 0.00 0.00 7575.69 7427.41 14369.37 00:24:18.219 0 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:18.219 | select(.opcode=="crc32c") 00:24:18.219 | "\(.module_name) \(.executed)"' 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3481434 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3481434 ']' 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3481434 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481434 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481434' 00:24:18.219 killing process with pid 3481434 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3481434 00:24:18.219 Received shutdown signal, test time was about 2.000000 seconds 00:24:18.219 00:24:18.219 Latency(us) 00:24:18.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.219 =================================================================================================================== 00:24:18.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.219 04:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3481434 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:18.478 04:25:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3481968 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3481968 /var/tmp/bperf.sock 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3481968 ']' 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.478 [2024-05-15 04:25:06.279445] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
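The bdevperf command line is identical across the four passes apart from the workload and I/O geometry; as I read the flags of the randwrite instance launching here (paths abbreviated, flag comments are my annotation of the traced invocation):

    args=(
        -m 2                     # core mask 0x2: run the reactor on core 1
        -r /var/tmp/bperf.sock   # private RPC socket for this bdevperf instance
        -w randwrite             # workload pattern for this pass
        -o 4096                  # I/O size in bytes
        -t 2                     # run the test for 2 seconds
        -q 128                   # queue depth
        -z                       # idle until bdevperf.py sends perform_tests
        --wait-for-rpc           # defer subsystem init until framework_start_init
    )
    ./build/examples/bdevperf "${args[@]}" &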
00:24:18.478 [2024-05-15 04:25:06.279522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481968 ] 00:24:18.478 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.478 [2024-05-15 04:25:06.346998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.478 [2024-05-15 04:25:06.454881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.478 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:19.045 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.045 04:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.303 nvme0n1 00:24:19.303 04:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:19.303 04:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.303 Running I/O for 2 seconds... 
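This is the third of four run_bperf invocations; the clean-path suite sweeps both I/O directions at two geometries, always with DSA scanning disabled on this rig. The 128 KiB passes are the ones that print the "greater than zero copy threshold (65536)" notice. The sweep, condensed into a loop purely for illustration (digest.sh itself spells the four calls out):

    for spec in 'randread 4096 128' 'randread 131072 16' \
                'randwrite 4096 128' 'randwrite 131072 16'; do
        run_bperf $spec false      # args: rw bs qd scan_dsa
    done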
00:24:21.832 00:24:21.832 Latency(us) 00:24:21.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.832 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:21.832 nvme0n1 : 2.01 18687.48 73.00 0.00 0.00 6832.73 6092.42 16796.63 00:24:21.832 =================================================================================================================== 00:24:21.832 Total : 18687.48 73.00 0.00 0.00 6832.73 6092.42 16796.63 00:24:21.832 0 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:21.832 | select(.opcode=="crc32c") 00:24:21.832 | "\(.module_name) \(.executed)"' 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3481968 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3481968 ']' 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3481968 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481968 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481968' 00:24:21.832 killing process with pid 3481968 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3481968 00:24:21.832 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.832 00:24:21.832 Latency(us) 00:24:21.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.832 =================================================================================================================== 00:24:21.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.832 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3481968 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:22.091 04:25:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3482381 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3482381 /var/tmp/bperf.sock 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3482381 ']' 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:22.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:22.091 04:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:22.091 [2024-05-15 04:25:09.927057] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:22.091 [2024-05-15 04:25:09.927136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482381 ] 00:24:22.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.091 Zero copy mechanism will not be used. 
00:24:22.091 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.091 [2024-05-15 04:25:10.002152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.351 [2024-05-15 04:25:10.127438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.917 04:25:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:22.917 04:25:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:22.917 04:25:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:22.917 04:25:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:22.917 04:25:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:23.481 04:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.481 04:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.739 nvme0n1 00:24:23.739 04:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:23.739 04:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:23.997 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.997 Zero copy mechanism will not be used. 00:24:23.997 Running I/O for 2 seconds... 
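Once this final clean-path run completes, each helper process is torn down the same way the earlier passes were: killprocess checks that the pid is still alive, looks up its command name, logs it, then kills and reaps it; afterwards the parent nvmf_tgt (pid 3480884) gets the same treatment. A rough reconstruction from the checks visible in the trace (the real helper in autotest_common.sh carries extra branches, e.g. for processes run under sudo):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                        # the traced '[' -z ... ']' guard
        kill -0 "$pid" || return 1                       # still running?
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # a process named sudo would need different handling (branch not taken here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }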
00:24:25.896 00:24:25.896 Latency(us) 00:24:25.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.896 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:25.896 nvme0n1 : 2.01 1434.16 179.27 0.00 0.00 11120.89 8592.50 17767.54 00:24:25.896 =================================================================================================================== 00:24:25.896 Total : 1434.16 179.27 0.00 0.00 11120.89 8592.50 17767.54 00:24:25.896 0 00:24:25.896 04:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:25.897 04:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:25.897 04:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:25.897 04:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:25.897 04:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:25.897 | select(.opcode=="crc32c") 00:24:25.897 | "\(.module_name) \(.executed)"' 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3482381 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3482381 ']' 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3482381 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482381 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482381' 00:24:26.155 killing process with pid 3482381 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3482381 00:24:26.155 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.155 00:24:26.155 Latency(us) 00:24:26.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.155 =================================================================================================================== 00:24:26.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.155 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3482381 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3480884 00:24:26.413 04:25:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3480884 ']' 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3480884 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3480884 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3480884' 00:24:26.413 killing process with pid 3480884 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3480884 00:24:26.413 [2024-05-15 04:25:14.351596] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:26.413 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3480884 00:24:26.672 00:24:26.672 real 0m17.102s 00:24:26.672 user 0m34.630s 00:24:26.672 sys 0m4.025s 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:26.672 ************************************ 00:24:26.672 END TEST nvmf_digest_clean 00:24:26.672 ************************************ 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.672 ************************************ 00:24:26.672 START TEST nvmf_digest_error 00:24:26.672 ************************************ 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:26.672 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3482965 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3482965 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
3482965 ']' 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.930 04:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.930 [2024-05-15 04:25:14.736257] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:26.931 [2024-05-15 04:25:14.736355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.931 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.931 [2024-05-15 04:25:14.825483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.931 [2024-05-15 04:25:14.944295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.931 [2024-05-15 04:25:14.944353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.931 [2024-05-15 04:25:14.944369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.931 [2024-05-15 04:25:14.944383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.931 [2024-05-15 04:25:14.944402] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
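The error-path target that just started differs from the clean-path one in how crc32c is wired up: digest.sh reassigns the opcode to the accel error module on the target, and run_bperf_err later toggles injection around the controller attach while telling the initiator to keep retrying and to count NVMe errors. The RPC calls, condensed from the trace that follows (rpc_cmd goes to the target's default /var/tmp/spdk.sock, bperf_rpc to the bdevperf socket; the -i 256 argument is reproduced verbatim from the trace):

    # target side: route crc32c through the error-injection accel module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # initiator side: unlimited retries, per-error NVMe statistics
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable     # injection off for the attach
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then enable crc32c corruption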
00:24:26.931 [2024-05-15 04:25:14.944431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.863 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.863 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.864 [2024-05-15 04:25:15.746940] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.864 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.864 null0 00:24:27.864 [2024-05-15 04:25:15.864523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.122 [2024-05-15 04:25:15.888512] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:28.122 [2024-05-15 04:25:15.888784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3483203 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3483203 /var/tmp/bperf.sock 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3483203 ']' 00:24:28.122 
04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:28.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:28.122 04:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.122 [2024-05-15 04:25:15.941422] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:28.122 [2024-05-15 04:25:15.941520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483203 ] 00:24:28.122 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.122 [2024-05-15 04:25:16.016779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.380 [2024-05-15 04:25:16.138813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.946 04:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:28.946 04:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:28.946 04:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.946 04:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:29.204 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:29.770 nvme0n1 00:24:29.770 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:29.770 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.770 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.770 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.770 04:25:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:29.770 04:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:29.770 Running I/O for 2 seconds... 00:24:29.770 [2024-05-15 04:25:17.681965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.770 [2024-05-15 04:25:17.682028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-05-15 04:25:17.682048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.770 [2024-05-15 04:25:17.696746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.770 [2024-05-15 04:25:17.696782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-05-15 04:25:17.696801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.770 [2024-05-15 04:25:17.708969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.770 [2024-05-15 04:25:17.709000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-05-15 04:25:17.709017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.770 [2024-05-15 04:25:17.722416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.770 [2024-05-15 04:25:17.722445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-05-15 04:25:17.722460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.770 [2024-05-15 04:25:17.734884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.770 [2024-05-15 04:25:17.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-05-15 04:25:17.734961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.770 [2024-05-15 04:25:17.748403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.771 [2024-05-15 04:25:17.748447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-05-15 04:25:17.748464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.771 [2024-05-15 04:25:17.760870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13ea720) 00:24:29.771 [2024-05-15 04:25:17.760905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-05-15 04:25:17.760924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.771 [2024-05-15 04:25:17.773520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:29.771 [2024-05-15 04:25:17.773554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-05-15 04:25:17.773573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.029 [2024-05-15 04:25:17.787751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.029 [2024-05-15 04:25:17.787785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.029 [2024-05-15 04:25:17.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.029 [2024-05-15 04:25:17.800574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.029 [2024-05-15 04:25:17.800608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.029 [2024-05-15 04:25:17.800628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.029 [2024-05-15 04:25:17.813736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.029 [2024-05-15 04:25:17.813770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.029 [2024-05-15 04:25:17.813788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.029 [2024-05-15 04:25:17.827407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.029 [2024-05-15 04:25:17.827442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.029 [2024-05-15 04:25:17.827468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.029 [2024-05-15 04:25:17.840742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.029 [2024-05-15 04:25:17.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.840795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.854363] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.854398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.854416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.867868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.867902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.867921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.880973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.881006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.894757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.894791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.894809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.908192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.908226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.908244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.921583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.921617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.921636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.936481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.936515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.936533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:30.030 [2024-05-15 04:25:17.948187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.948236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.948256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.962763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.962797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.962815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.975955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.975989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.976007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:17.988703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:17.988737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:17.988755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:18.004077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:18.004110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:18.004129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:18.016492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:18.016526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:18.016545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:18.029523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:18.029557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:18.029575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.030 [2024-05-15 04:25:18.043905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.030 [2024-05-15 04:25:18.043959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.030 [2024-05-15 04:25:18.043979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.058288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.058323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.058342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.069906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.069957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.069978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.085267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.085312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.085331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.098030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.098064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.098082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.112068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.112121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.124215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.124250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.124273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.139865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.139899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.139918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.152612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.152647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.152666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.165840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.165874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.165892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.179512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.179552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.179572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.192993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.193026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.193045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.206701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.206735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.206753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.220740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.220773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
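The repeated 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' completions above are the expected outcome of the digest error injection configured earlier in this test. A minimal recap of that RPC sequence, using the sockets, addresses and rpc.py path exactly as they appear in the commands above (the test drives these through its rpc_cmd/bperf_rpc helpers; the comments are an interpretation of what each step sets up, not output from the run):
  # on the nvmf target: assign the crc32c operation to the error-injection accel module
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # on the bdevperf side: attach the controller with data digest (--ddgst) enabled
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # on the nvmf target: start corrupting crc32c results (flags as used by the test) so reads fail digest verification on the host
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256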
00:24:30.289 [2024-05-15 04:25:18.220792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.233325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.233359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.233377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.246083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.246116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.246135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.262072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.262106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.273223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.273257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.273275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.288034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.288068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.288086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.289 [2024-05-15 04:25:18.301895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.289 [2024-05-15 04:25:18.301936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.289 [2024-05-15 04:25:18.301957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.314880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.314913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:7235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.314941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.328505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.328539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.328558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.342107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.342141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.342160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.354299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.354333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.354352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.368826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.368859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.368878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.381565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.381598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.381617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.395107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.395141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.395159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.409476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.409510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.409535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.423085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.423120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.423138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.437271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.437305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.437324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.449297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.449329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.449348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.464633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.464667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.464686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.478286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.478319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.478338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.492093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.492127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.492146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.505586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 
00:24:30.548 [2024-05-15 04:25:18.505620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.505638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.518085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.518119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.518137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.532315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.532355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.532374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.545672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.545707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.545727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.548 [2024-05-15 04:25:18.557927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.548 [2024-05-15 04:25:18.557968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.548 [2024-05-15 04:25:18.557986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.573098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.573142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.573166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.587002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.587051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.587086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.602745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.602781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.602801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.614712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.614747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.614766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.627737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.627771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.627789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.642964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.643016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.654140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.654174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.654192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.669459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.669492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.669512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.683637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.683670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.683688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.695599] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.695632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.695650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.710098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.710130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.710148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.723107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.723140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.723158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.736515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.736547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.736565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.751357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.751389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.751407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.763050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.763082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.763108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.777562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.777594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:30.838 [2024-05-15 04:25:18.790449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.790481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.790500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.804860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.804910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.838 [2024-05-15 04:25:18.818506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:30.838 [2024-05-15 04:25:18.818538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.838 [2024-05-15 04:25:18.818556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.095 [2024-05-15 04:25:18.832615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.095 [2024-05-15 04:25:18.832650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-05-15 04:25:18.832668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.095 [2024-05-15 04:25:18.845678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.095 [2024-05-15 04:25:18.845710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-05-15 04:25:18.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.095 [2024-05-15 04:25:18.859676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.095 [2024-05-15 04:25:18.859709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-05-15 04:25:18.859726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.095 [2024-05-15 04:25:18.871794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.095 [2024-05-15 04:25:18.871826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-05-15 04:25:18.871845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.095 [2024-05-15 04:25:18.885544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.885577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.885596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.900153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.900185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.900203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.914159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.914190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.914208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.926198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.926231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.926249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.941133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.941167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.941186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.954552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.954584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.954602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.967607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.967639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.967657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.981599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.981632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.981650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:18.995058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:18.995090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:18.995114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.006503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.006535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.006553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.021643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.021677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.021694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.034496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.034528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.034546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.049407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.049438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.049456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.060823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
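The 'Running I/O for 2 seconds...' workload that produced these completions comes from bdevperf started in wait-for-RPC mode and then triggered over its dedicated socket; the two commands, as they appear earlier in this log, are:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Here -w randread, -o 4096 and -q 128 match the run_bperf_err randread 4096 128 invocation, -t 2 gives the 2-second run, and -z keeps bdevperf idle until perform_tests is issued over /var/tmp/bperf.sock.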
00:24:31.096 [2024-05-15 04:25:19.060873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.074976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.075008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.075026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.088293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.088325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.096 [2024-05-15 04:25:19.101205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.096 [2024-05-15 04:25:19.101237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.096 [2024-05-15 04:25:19.101255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.114848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.114886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.114905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.129971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.130003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.130021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.141981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.142014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.142032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.156164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.156196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:10027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.156214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.169997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.170030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.170048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.182128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.182161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.182179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.196073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.196107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.196126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.210234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.210266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.210285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.223385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.223418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.223436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.236765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.236798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.236816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.249336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.249369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.249387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.263661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.263695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.263714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.277651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.277684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.277702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.289447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.289479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.289497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.304041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.304079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.304097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.318139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.318171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.318189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.330684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.330716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.330734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.344816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 
00:24:31.353 [2024-05-15 04:25:19.344849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.344873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.353 [2024-05-15 04:25:19.358522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.353 [2024-05-15 04:25:19.358554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.353 [2024-05-15 04:25:19.358572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.610 [2024-05-15 04:25:19.370496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.610 [2024-05-15 04:25:19.370529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.610 [2024-05-15 04:25:19.370546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.610 [2024-05-15 04:25:19.384730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.610 [2024-05-15 04:25:19.384763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.610 [2024-05-15 04:25:19.384781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.610 [2024-05-15 04:25:19.398701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.610 [2024-05-15 04:25:19.398733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.610 [2024-05-15 04:25:19.398751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.610 [2024-05-15 04:25:19.412067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.610 [2024-05-15 04:25:19.412099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.412117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.424984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.425016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.425034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.438770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.438821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.451245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.451296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.465415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.465475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.479175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.479206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.479224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.492875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.492926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.504974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.505007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.505024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.519323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.519355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.519374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.532630] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.532665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.532683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.546367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.546399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.546417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.559813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.559845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.559864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.573914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.573954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.573979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.586330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.586364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.586382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.600427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.600459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.600477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.611 [2024-05-15 04:25:19.615194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720) 00:24:31.611 [2024-05-15 04:25:19.615226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.611 [2024-05-15 04:25:19.615244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:24:31.868 [2024-05-15 04:25:19.627100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720)
00:24:31.868 [2024-05-15 04:25:19.627140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.868 [2024-05-15 04:25:19.627160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:31.868 [2024-05-15 04:25:19.642134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720)
00:24:31.868 [2024-05-15 04:25:19.642168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.868 [2024-05-15 04:25:19.642187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:31.868 [2024-05-15 04:25:19.656362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720)
00:24:31.868 [2024-05-15 04:25:19.656396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.868 [2024-05-15 04:25:19.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:31.868 [2024-05-15 04:25:19.668547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ea720)
00:24:31.868 [2024-05-15 04:25:19.668580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.868 [2024-05-15 04:25:19.668599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:31.868
00:24:31.868 Latency(us)
00:24:31.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:31.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:31.868 nvme0n1 : 2.00 18856.45 73.66 0.00 0.00 6777.45 3131.16 17476.27
00:24:31.868 ===================================================================================================================
00:24:31.868 Total : 18856.45 73.66 0.00 0.00 6777.45 3131.16 17476.27
00:24:31.868 0
00:24:31.868 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:31.868 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:31.868 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:31.868 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:31.868 | .driver_specific
00:24:31.868 | .nvme_error
00:24:31.868 | .status_code
00:24:31.868 | .command_transient_transport_error'
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 ))
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3483203
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3483203 ']'
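The verdict for the 4 KiB pass above comes from get_transient_errcount: bdevperf's iostat for nvme0n1 is fetched over the bperf RPC socket and the per-status NVMe error counter for COMMAND TRANSIENT TRANSPORT ERROR (00/22) is extracted with jq, 148 in this run; the throughput in the table is self-consistent (18856.45 IOPS * 4096 B is about 73.66 MiB/s). A minimal stand-alone form of that query is sketched below; the rpc.py path, socket and jq filter are copied from the trace, while the errcount variable name is only illustrative:

    #!/usr/bin/env bash
    # Read the NVMe error statistics that bdevperf accumulated for nvme0n1 and pull out
    # the count of completions that ended as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
    # which is how the injected data digest failures are surfaced in bdev_get_iostat.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only requires the counter to be non-zero (this run saw 148).
    (( errcount > 0 )) && echo "transient transport errors: $errcount"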
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3483203
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483203
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483203'
00:24:32.125 killing process with pid 3483203
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3483203
00:24:32.125 Received shutdown signal, test time was about 2.000000 seconds
00:24:32.125
00:24:32.125 Latency(us)
00:24:32.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:32.125 ===================================================================================================================
00:24:32.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:32.125 04:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3483203
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3483635
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3483635 /var/tmp/bperf.sock
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3483635 ']'
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:32.383 04:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:32.383 [2024-05-15 04:25:20.306849] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization...
00:24:32.383 [2024-05-15 04:25:20.306958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483635 ]
00:24:32.383 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:32.383 Zero copy mechanism will not be used.
00:24:32.383 EAL: No free 2048 kB hugepages reported on node 1
00:24:32.383 [2024-05-15 04:25:20.385311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:32.641 [2024-05-15 04:25:20.508839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:33.572 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:33.572 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:24:33.572 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:33.572 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:33.829 04:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:34.087 nvme0n1
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:34.087 04:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:34.345 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:34.345 Zero copy mechanism will not be used.
00:24:34.345 Running I/O for 2 seconds...
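Relative to the first pass, this run drives 128 KiB random reads at queue depth 16, so the completions that follow report len:32 blocks instead of len:1, and the crc32c corruption is re-armed for 32 operations before perform_tests opens the 2 second window. Condensed from the trace above, the sequence looks roughly like the sketch below; $SPDK stands in for the workspace path, and the accel_error_inject_error calls are shown against the default rpc.py socket because the harness issues them through rpc_cmd rather than the bperf socket (an assumption about which application they reach):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Start bdevperf idle (-z waits for an RPC kick): core mask 0x2, randread, 128 KiB I/O, qd 16, 2 s.
    # The harness waits for /var/tmp/bperf.sock to come up before issuing the calls below.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Keep per-status NVMe error counters; --bdev-retry-count -1 keeps retrying failed I/O.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any previous crc32c injection, then attach the subsystem with data digest enabled.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations so reads complete with data digest errors, then drive I/O.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests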
00:24:34.345 [2024-05-15 04:25:22.156390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.156445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.156467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.171768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.171804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.171824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.187078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.187120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.187139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.202341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.202374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.202393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.217594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.217626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.217645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.232870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.232902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.232921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.248047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.248080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.248099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.263152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.263184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.263202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.278283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.278315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.278334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.293434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.293466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.293484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.308564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.308597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.308615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.324498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.324531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.324549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.339853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.339885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.345 [2024-05-15 04:25:22.339903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.345 [2024-05-15 04:25:22.354988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.345 [2024-05-15 04:25:22.355021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.346 [2024-05-15 04:25:22.355039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.370247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.370279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.370297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.385865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.385897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.385915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.401446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.401478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.401496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.417110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.417142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.417160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.432684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.432716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.432735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.447973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.448004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.448029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.463158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.463190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.604 [2024-05-15 04:25:22.463209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.478566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.478599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.478617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.493746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.493778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.493796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.508904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.508943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.604 [2024-05-15 04:25:22.508963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.604 [2024-05-15 04:25:22.524097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.604 [2024-05-15 04:25:22.524129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.524147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.539246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.539279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.539296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.555305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.555338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.555357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.570526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.570559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.570577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.585697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.585741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.600944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.600977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.600994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.605 [2024-05-15 04:25:22.616177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.605 [2024-05-15 04:25:22.616210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.605 [2024-05-15 04:25:22.616227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.631377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.631420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.631438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.646530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.646564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.646581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.661735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.661768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.661786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.676899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.676940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.676961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.692122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.692154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.692173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.707284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.707316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.707340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.722525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.722558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.722576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.737946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.737985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.738002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.753120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.753152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.753171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.768402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.863 [2024-05-15 04:25:22.768434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.863 [2024-05-15 04:25:22.768452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.863 [2024-05-15 04:25:22.783656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 
00:24:34.864 [2024-05-15 04:25:22.783687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.783705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.798959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.798991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.799009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.814119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.814151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.814169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.829268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.829300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.829318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.844590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.844629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.844648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.859707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.859739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.859757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.864 [2024-05-15 04:25:22.874979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:34.864 [2024-05-15 04:25:22.875010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.864 [2024-05-15 04:25:22.875028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.890119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.122 [2024-05-15 04:25:22.890150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.122 [2024-05-15 04:25:22.890168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.905478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.122 [2024-05-15 04:25:22.905510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.122 [2024-05-15 04:25:22.905528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.920623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.122 [2024-05-15 04:25:22.920654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.122 [2024-05-15 04:25:22.920672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.935737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.122 [2024-05-15 04:25:22.935769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.122 [2024-05-15 04:25:22.935786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.950918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.122 [2024-05-15 04:25:22.950957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.122 [2024-05-15 04:25:22.950976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.122 [2024-05-15 04:25:22.966190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:22.966223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:22.966240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:22.981561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:22.981593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:22.981611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:22.996716] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:22.996747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:22.996765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.011845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.011877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.011895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.027008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.027039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.027057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.042139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.042170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.042188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.057311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.057342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.072448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.072479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.072496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.087688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.087719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.087737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:35.123 [2024-05-15 04:25:23.102975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.103009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.103035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.118249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.118282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.118300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.123 [2024-05-15 04:25:23.133680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.123 [2024-05-15 04:25:23.133713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.123 [2024-05-15 04:25:23.133740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.148861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.148914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.164012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.164043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.164061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.179127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.179158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.179176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.194494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.194527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.194546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.209666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.209698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.209716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.225032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.225065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.225083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.240805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.240837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.240855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.256107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.256138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.256155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.271342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.271374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.271391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.286692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.286725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.286742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.302072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.382 [2024-05-15 04:25:23.302103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.382 [2024-05-15 04:25:23.302121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.382 [2024-05-15 04:25:23.317329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.317359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.317377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.383 [2024-05-15 04:25:23.332456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.332488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.332506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.383 [2024-05-15 04:25:23.347658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.347693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.347712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.383 [2024-05-15 04:25:23.363092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.363125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.363149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.383 [2024-05-15 04:25:23.378457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.378491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.378509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.383 [2024-05-15 04:25:23.393689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.383 [2024-05-15 04:25:23.393721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.383 [2024-05-15 04:25:23.393739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.409065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.409096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
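Each iteration in the dump above follows the same three-line pattern: nvme_tcp.c flags a data digest (CRC32C) error on the receive path, nvme_qpair.c prints the affected READ, and the command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. a retryable transport-level failure rather than a hard I/O error. The harness later tallies these completions through bdev_get_iostat (see the get_transient_errcount trace further down); as a rough cross-check, the same figure can be pulled from a saved copy of this console output. A minimal sketch, assuming the output was captured to build.log (placeholder name):

  # count retryable digest-error completions in a saved console log (build.log is a placeholder)
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log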
00:24:35.641 [2024-05-15 04:25:23.409114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.424371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.424403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.424421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.439496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.439529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.439547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.454680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.454714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.454732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.469967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.470018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.485158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.485201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.485219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.500333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.500370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.500389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.515496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.515528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.515546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.530641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.530674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.530691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.545997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.546028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.546046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.561138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.561170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.561187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.576551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.576583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.576601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.591715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.591747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.591764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.607146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.607179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.607198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.622274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.622306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.622324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.637694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.641 [2024-05-15 04:25:23.637726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.641 [2024-05-15 04:25:23.637744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.641 [2024-05-15 04:25:23.652901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.642 [2024-05-15 04:25:23.652940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.642 [2024-05-15 04:25:23.652960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.899 [2024-05-15 04:25:23.668079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.899 [2024-05-15 04:25:23.668110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.668128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.683458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.683491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.683510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.698687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.698721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.698740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.713984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.714017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.714035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.729436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 
[2024-05-15 04:25:23.729471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.729490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.745162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.745197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.745216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.760372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.760406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.760431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.775591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.775625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.775643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.790427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.790455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.790470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.805144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.805173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.805188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.820139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.820173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.820191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.835305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.835338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.835356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.850508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.850540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.850558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.865911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.865955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.865974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.881284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.881317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.881335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.897130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.897167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.897187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.900 [2024-05-15 04:25:23.912390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:35.900 [2024-05-15 04:25:23.912421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.900 [2024-05-15 04:25:23.912439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.170 [2024-05-15 04:25:23.927542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.170 [2024-05-15 04:25:23.927575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.170 [2024-05-15 04:25:23.927592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.170 [2024-05-15 04:25:23.943080] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.170 [2024-05-15 04:25:23.943112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.172 [2024-05-15 04:25:23.943130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:36.172 [2024-05-15 04:25:23.958260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.172 [2024-05-15 04:25:23.958291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.172 [2024-05-15 04:25:23.958309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:36.172 [2024-05-15 04:25:23.973710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.172 [2024-05-15 04:25:23.973741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.172 [2024-05-15 04:25:23.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.172 [2024-05-15 04:25:23.989088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.172 [2024-05-15 04:25:23.989119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.172 [2024-05-15 04:25:23.989137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.172 [2024-05-15 04:25:24.004327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.172 [2024-05-15 04:25:24.004359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.172 [2024-05-15 04:25:24.004377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:36.172 [2024-05-15 04:25:24.019588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.172 [2024-05-15 04:25:24.019620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.019637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:36.173 [2024-05-15 04:25:24.035114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.173 [2024-05-15 04:25:24.035147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.035166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:36.173 [2024-05-15 04:25:24.050274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.173 [2024-05-15 04:25:24.050306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.050324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.173 [2024-05-15 04:25:24.065638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.173 [2024-05-15 04:25:24.065670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.065688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:36.173 [2024-05-15 04:25:24.081051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.173 [2024-05-15 04:25:24.081082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.081100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:36.173 [2024-05-15 04:25:24.096306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.173 [2024-05-15 04:25:24.096338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.173 [2024-05-15 04:25:24.096355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.173 [2024-05-15 04:25:24.111666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.174 [2024-05-15 04:25:24.111697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.174 [2024-05-15 04:25:24.111715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.174 [2024-05-15 04:25:24.127167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.174 [2024-05-15 04:25:24.127198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.174 [2024-05-15 04:25:24.127216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:36.174 [2024-05-15 04:25:24.142479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfb11c0) 00:24:36.174 [2024-05-15 04:25:24.142511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.174 [2024-05-15 04:25:24.142529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.174 
00:24:36.174 Latency(us)
00:24:36.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:36.174 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:36.174 nvme0n1 : 2.00 2029.42 253.68 0.00 0.00 7875.69 7184.69 15922.82
00:24:36.174 ===================================================================================================================
00:24:36.174 Total : 2029.42 253.68 0.00 0.00 7875.69 7184.69 15922.82
00:24:36.174 0
00:24:36.174 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:36.174 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:36.174 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:36.174 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:36.174 | .driver_specific
00:24:36.174 | .nvme_error
00:24:36.174 | .status_code
00:24:36.174 | .command_transient_transport_error'
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 ))
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3483635
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3483635 ']'
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3483635
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483635
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483635'
00:24:36.437 killing process with pid 3483635
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3483635
00:24:36.437 Received shutdown signal, test time was about 2.000000 seconds
00:24:36.437 
00:24:36.437 Latency(us)
00:24:36.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:36.437 ===================================================================================================================
00:24:36.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3483635
00:24:36.437 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:37.003 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:37.003 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:37.003 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:37.003 04:25:24
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:37.003 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3484174 00:24:37.003 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3484174 /var/tmp/bperf.sock 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3484174 ']' 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:37.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:37.004 04:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:37.004 [2024-05-15 04:25:24.760175] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:37.004 [2024-05-15 04:25:24.760257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484174 ] 00:24:37.004 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.004 [2024-05-15 04:25:24.833824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.004 [2024-05-15 04:25:24.953426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.263 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:37.263 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:37.263 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:37.263 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.521 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.779 nvme0n1 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:37.779 04:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:38.037 Running I/O for 2 seconds... 00:24:38.037 [2024-05-15 04:25:25.898706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.037 [2024-05-15 04:25:25.899061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.037 [2024-05-15 04:25:25.899108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.037 [2024-05-15 04:25:25.912576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.037 [2024-05-15 04:25:25.912887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.037 [2024-05-15 04:25:25.912926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.037 [2024-05-15 04:25:25.926432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.037 [2024-05-15 04:25:25.926737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.037 [2024-05-15 04:25:25.926770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.037 [2024-05-15 04:25:25.940292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.037 [2024-05-15 04:25:25.940598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:25.940630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:25.954127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:25.954475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:25.954506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:25.967943] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:25.968283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:25.968314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:25.981683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:25.982017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:25.982048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:25.995405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:25.995704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:25.995735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:26.009119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:26.009450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:26.009480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:26.022874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:26.023214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:26.023245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:26.036547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:26.036850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:26.036887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.038 [2024-05-15 04:25:26.050346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.038 [2024-05-15 04:25:26.050680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.038 [2024-05-15 04:25:26.050711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 
04:25:26.064168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.064514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.064544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.077897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.078238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.078269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.091660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.091998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.092029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.105349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.105681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.105711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.119059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.119390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.119420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.132776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.133131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.133161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.146492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.146821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.146853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 
[2024-05-15 04:25:26.160232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.160569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.160599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.174029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.174361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.174392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.187706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.188039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.188070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.201395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.201734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.201763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.215117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.215413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.215444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.228820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.229135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.229165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.296 [2024-05-15 04:25:26.242539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.296 [2024-05-15 04:25:26.242871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.296 [2024-05-15 04:25:26.242901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
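The WRITE-side dump above belongs to the second pass of the test (randwrite, 4096-byte blocks, queue depth 128). The trace earlier on this page shows how it is set up: per-status-code NVMe error accounting is switched on, the controller is attached with data digest (--ddgst) enabled, crc32c corruption is re-armed through accel_error_inject_error, and the already-running bdevperf process (started with -z and -t 2) is kicked via perform_tests. A condensed sketch of that RPC sequence, with the long rpc.py invocation shortened to a shell variable for readability; sockets and flags are copied from the trace, and accel_error_inject_error appears to be issued via rpc_cmd rather than against bperf.sock:

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # per-status-code error counters; the -1 retry count is taken verbatim from the trace
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the NVMe-oF TCP controller with data digest enabled
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 256 crc32c operations so computed data digests stop matching
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # start the timed randwrite run inside the idle (-z) bdevperf instance
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # read back the transient-error counter, exactly as in the randread pass above
  $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'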
00:24:38.296 [2024-05-15 04:25:26.256247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.297 [2024-05-15 04:25:26.256544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.297 [2024-05-15 04:25:26.256575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.297 [2024-05-15 04:25:26.269923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.297 [2024-05-15 04:25:26.270274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.297 [2024-05-15 04:25:26.270304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.297 [2024-05-15 04:25:26.283601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.297 [2024-05-15 04:25:26.283900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.297 [2024-05-15 04:25:26.283937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.297 [2024-05-15 04:25:26.297286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.297 [2024-05-15 04:25:26.297584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.297 [2024-05-15 04:25:26.297613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.297 [2024-05-15 04:25:26.311013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.311354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.311384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.324743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.325091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.325121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.338430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.338726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.338756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:24:38.555 [2024-05-15 04:25:26.352196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.352558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.365885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.366228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.366258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.379588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.379887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.379917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.393242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.393582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.393619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.406899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.407234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.407264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.420610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.420947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.420978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.434385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.434691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.434723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.448044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.448374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.448404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.461705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.462037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.462067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.475360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.475659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.475689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.489018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.489348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.489377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.502690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.503019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.503049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.516397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.516742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.516772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.530114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.530446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.530476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.543794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.544133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.544164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.555 [2024-05-15 04:25:26.557447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.555 [2024-05-15 04:25:26.557745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.555 [2024-05-15 04:25:26.557774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.571143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.571444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.571473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.584849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.585188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.585218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.598461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.598760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.598791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.612128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.612452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.612482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.625778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.626121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.626151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.639421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.639749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.639778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.653040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.653373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.653403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.666682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.667025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.667055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.680309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.680648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.680678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.694010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.694340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.694370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.707622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.707924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.707963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.721271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.721609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.721639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.734901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.735241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.735273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.748528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.748858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.748888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.762157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.762496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.762525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.775952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.776283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.776313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.789584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.789882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.789911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.803200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.803527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.803557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.814 [2024-05-15 04:25:26.816793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:38.814 [2024-05-15 04:25:26.817104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.814 [2024-05-15 04:25:26.817135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.830531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.830833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.830863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.844226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.844526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.844555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.857849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.858186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.858216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.871513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.871820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.885110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.885413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.885443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.898748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.899048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.899077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.912392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.912686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.912714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.926065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.926394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.926424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.939680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.072 [2024-05-15 04:25:26.940022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.072 [2024-05-15 04:25:26.940051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.072 [2024-05-15 04:25:26.953403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:26.953703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:26.953734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:26.967025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:26.967365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:26.967396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:26.980648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:26.980976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:26.981006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:26.994305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:26.994611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:26.994641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.007893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.008204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.008235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.021545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.021884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.021914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.035194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.035534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.035565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.048825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.049166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.049195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.062457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.062755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.062791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.073 [2024-05-15 04:25:27.076119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.073 [2024-05-15 04:25:27.076457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.073 [2024-05-15 04:25:27.076488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.089827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.090191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.090221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.103509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.103838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.103868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.117193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.117491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.117522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.130841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.131148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.131179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.144494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.144823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.144853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.158160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.158462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.158492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.171821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.172164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.172194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.185477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.185773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.185802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.199123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.199460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.199490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.212815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.213127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.226455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.226755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.226790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.240068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.240371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.240401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.253733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.254039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.254069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.267389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.267728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.267758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.281019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.281346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.281376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.294648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.294991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.295020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.308294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.308630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.308661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.321959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.322316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.322345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.332 [2024-05-15 04:25:27.335597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.332 [2024-05-15 04:25:27.335939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.332 [2024-05-15 04:25:27.335970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.349299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.349640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.349670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.362974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.363328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.363358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.376710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.377046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.377078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.390380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.390677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.390706] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.404039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.404386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.404415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.417666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.417964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.418002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.431284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.431584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.431614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.444939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.445268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.445298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.458575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.458874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.458905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.472279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.472621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.472650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.485900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.486248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.486277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.499516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.499846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.499875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.513143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.513476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.513505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.526848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.527189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.540497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.540797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.540826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.554168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.554499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.554529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.567863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.568202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.568232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.581618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.581916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.581954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.591 [2024-05-15 04:25:27.595309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.591 [2024-05-15 04:25:27.595652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.591 [2024-05-15 04:25:27.595682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.609086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.609413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.609443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.622793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.623130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.623160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.636481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.636780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.636810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.650192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.650520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.650550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.663884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.664229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.664259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.677587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.677887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 
04:25:27.677917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.691284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.691583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.691613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.704974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.705312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.705348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.718630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.718961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.718991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.732465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.732767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.732796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.746138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.746467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.746497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.759789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.760128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.760158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.773481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.773811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 
[2024-05-15 04:25:27.773841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.787316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.787618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.787649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.800996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.801327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.801357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.814672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.815011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.815041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.828332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.828644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.828675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.842057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.842387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.842418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:39.850 [2024-05-15 04:25:27.855752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:39.850 [2024-05-15 04:25:27.856055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.850 [2024-05-15 04:25:27.856086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:40.108 [2024-05-15 04:25:27.869548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:40.109 [2024-05-15 04:25:27.869846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:40.109 [2024-05-15 04:25:27.869877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:40.109 [2024-05-15 04:25:27.883196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481250) with pdu=0x2000190fda78 00:24:40.109 [2024-05-15 04:25:27.883496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.109 [2024-05-15 04:25:27.883527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:40.109 00:24:40.109 Latency(us) 00:24:40.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.109 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:40.109 nvme0n1 : 2.01 18618.35 72.73 0.00 0.00 6858.19 6359.42 13981.01 00:24:40.109 =================================================================================================================== 00:24:40.109 Total : 18618.35 72.73 0.00 0.00 6858.19 6359.42 13981.01 00:24:40.109 0 00:24:40.109 04:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:40.109 04:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:40.109 04:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:40.109 04:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:40.109 | .driver_specific 00:24:40.109 | .nvme_error 00:24:40.109 | .status_code 00:24:40.109 | .command_transient_transport_error' 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3484174 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3484174 ']' 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3484174 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3484174 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3484174' 00:24:40.367 killing process with pid 3484174 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3484174 00:24:40.367 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.367 00:24:40.367 Latency(us) 00:24:40.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.367 =================================================================================================================== 
00:24:40.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.367 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3484174 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3484696 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3484696 /var/tmp/bperf.sock 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3484696 ']' 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:40.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.626 04:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.626 [2024-05-15 04:25:28.503913] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:40.626 [2024-05-15 04:25:28.504009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484696 ] 00:24:40.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:40.626 Zero copy mechanism will not be used. 
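The second error pass above is driven by a separate bdevperf process started with -z, so it sits idle until the test script configures it over the /var/tmp/bperf.sock RPC socket. A minimal sketch of that launch, reusing the flags recorded in the trace (paths shortened relative to the spdk checkout used in this run), is:

  # start bdevperf in wait-for-RPC mode on its own socket: core mask 0x2,
  # 128 KiB random writes, queue depth 16, 2-second run (flags as logged above)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # the harness then waits (waitforlisten in the trace) until the socket answers before issuing RPCs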
00:24:40.626 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.626 [2024-05-15 04:25:28.579159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.884 [2024-05-15 04:25:28.700439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.816 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:41.817 04:25:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.383 nvme0n1 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:42.383 04:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:42.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.383 Zero copy mechanism will not be used. 00:24:42.383 Running I/O for 2 seconds... 
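Before the 2-second run starts, the digest-error scenario is armed entirely over RPC: error statistics and unlimited retries are enabled in the NVMe bdev layer, the controller is attached with data digest (--ddgst) enabled, and crc32c corruption is injected through accel_error_inject_error so that data digest verification fails and each write completes with a transient transport error. A condensed sketch of the calls visible in the trace above (the bdevperf instance is addressed via -s /var/tmp/bperf.sock; the accel injection goes through rpc_cmd, i.e. without the bperf socket, exactly as recorded; target address and NQN as used in this run):

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable       # clear any previous injection
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32 # options as recorded in the trace
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests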
00:24:42.383 [2024-05-15 04:25:30.342484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.383 [2024-05-15 04:25:30.342912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.383 [2024-05-15 04:25:30.342960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.383 [2024-05-15 04:25:30.363756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.383 [2024-05-15 04:25:30.364296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.383 [2024-05-15 04:25:30.364342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.383 [2024-05-15 04:25:30.387319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.383 [2024-05-15 04:25:30.387971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.383 [2024-05-15 04:25:30.388001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.411223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.412030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.412062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.435453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.435890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.435948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.457059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.457734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.457778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.480216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.480693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.480739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.503165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.503556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.503601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.526128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.526696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.526725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.548559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.549180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.549210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.572766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.573365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.573396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.594628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.595358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.621267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.621840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.621870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.642 [2024-05-15 04:25:30.643773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.642 [2024-05-15 04:25:30.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-05-15 04:25:30.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.665879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.666504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.666534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.690969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.691540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.691570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.715541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.716109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.716140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.739656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.740332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.740360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.761611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.762044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.762072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.784445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.808635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.809121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.809165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.829052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.829483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.829511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.852409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.852952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.853000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.878318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.878877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.878905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.912 [2024-05-15 04:25:30.902734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:42.912 [2024-05-15 04:25:30.903229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.912 [2024-05-15 04:25:30.903275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:30.926565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:30.927236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:30.927268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:30.950391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:30.951006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:30.951038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:30.974994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:30.975549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 
[2024-05-15 04:25:30.975577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:30.999713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.000384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.000412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.022611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.023202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.023245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.050569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.051132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.051165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.075342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.076104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.076132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.100698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.101216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.101244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.123208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.123613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.123641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.149107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.149756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.149784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.181 [2024-05-15 04:25:31.174986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.181 [2024-05-15 04:25:31.175444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.181 [2024-05-15 04:25:31.175488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.198994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.199728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.199756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.223583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.224143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.224172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.250561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.251293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.251320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.276265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.276936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.276979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.300370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.300977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.323819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.324512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.324541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.348708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.439 [2024-05-15 04:25:31.349323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.439 [2024-05-15 04:25:31.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.439 [2024-05-15 04:25:31.371920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.440 [2024-05-15 04:25:31.372357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.440 [2024-05-15 04:25:31.372384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.440 [2024-05-15 04:25:31.396467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.440 [2024-05-15 04:25:31.397128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.440 [2024-05-15 04:25:31.397173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.440 [2024-05-15 04:25:31.421410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.440 [2024-05-15 04:25:31.421936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.440 [2024-05-15 04:25:31.421963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.440 [2024-05-15 04:25:31.444541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.440 [2024-05-15 04:25:31.445336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.440 [2024-05-15 04:25:31.445364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.468304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.468690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.468717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.491973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.492618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.492648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.514443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.514972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.515000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.538847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.539422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.539450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.564213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.564688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.564716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.586328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.586873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.586900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.611007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.611687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.611714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.635749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.636427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.636456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.659538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 
[2024-05-15 04:25:31.660052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.683922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.684412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.684445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.698 [2024-05-15 04:25:31.707894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.698 [2024-05-15 04:25:31.708420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.698 [2024-05-15 04:25:31.708462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.732487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.733082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.733111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.755528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.756050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.756079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.781946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.782573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.782600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.805314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.805990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.806033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.830240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.830893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.830920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.854307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.854984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.855027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.876130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.876774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.876802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.901138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.901549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.901576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.926641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.927330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.927382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.957 [2024-05-15 04:25:31.951461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:43.957 [2024-05-15 04:25:31.952191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.957 [2024-05-15 04:25:31.952225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:31.976302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:31.976881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:31.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.001253] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.001913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.001948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.026042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.026533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.026561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.051687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.052189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.052231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.076820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.077431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.077473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.100342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.100773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.100800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.124327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.125002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.125045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.148857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.149368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.149395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
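Each failed write above completes with status (00/22), i.e. status code type 0x0 and status code 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR. Because bdev_nvme_set_options was called with --nvme-error-stat, these completions are tallied per bdev, and the pass/fail decision at the end of each run is simply whether that counter is non-zero. The check in the trace amounts to (socket and bdev name as used in this run):

  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))   # 146 for the qd=128 run above, 83 for this qd=16 run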
00:24:44.216 [2024-05-15 04:25:32.173698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.174264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.174293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.196275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.196698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-05-15 04:25:32.222060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.216 [2024-05-15 04:25:32.222623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-05-15 04:25:32.222650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.474 [2024-05-15 04:25:32.247083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.474 [2024-05-15 04:25:32.247555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.474 [2024-05-15 04:25:32.247582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.474 [2024-05-15 04:25:32.272237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.474 [2024-05-15 04:25:32.272900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.474 [2024-05-15 04:25:32.272927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.474 [2024-05-15 04:25:32.297455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.474 [2024-05-15 04:25:32.297865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.474 [2024-05-15 04:25:32.297893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.474 [2024-05-15 04:25:32.319175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1481600) with pdu=0x2000190fef90 00:24:44.474 [2024-05-15 04:25:32.319553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.474 [2024-05-15 04:25:32.319599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.474 00:24:44.474 Latency(us) 00:24:44.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:44.474 nvme0n1 : 2.01 1289.18 161.15 0.00 0.00 12373.55 6456.51 28350.39 00:24:44.474 =================================================================================================================== 00:24:44.474 Total : 1289.18 161.15 0.00 0.00 12373.55 6456.51 28350.39 00:24:44.474 0 00:24:44.474 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:44.474 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:44.474 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:44.474 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:44.474 | .driver_specific 00:24:44.474 | .nvme_error 00:24:44.474 | .status_code 00:24:44.474 | .command_transient_transport_error' 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 83 > 0 )) 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3484696 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3484696 ']' 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3484696 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3484696 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3484696' 00:24:44.733 killing process with pid 3484696 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3484696 00:24:44.733 Received shutdown signal, test time was about 2.000000 seconds 00:24:44.733 00:24:44.733 Latency(us) 00:24:44.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.733 =================================================================================================================== 00:24:44.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.733 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3484696 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3482965 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3482965 ']' 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3482965 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:44.990 04:25:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482965 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482965' 00:24:44.990 killing process with pid 3482965 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3482965 00:24:44.990 [2024-05-15 04:25:32.922310] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:44.990 04:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3482965 00:24:45.250 00:24:45.250 real 0m18.521s 00:24:45.250 user 0m37.333s 00:24:45.250 sys 0m4.030s 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:45.250 ************************************ 00:24:45.250 END TEST nvmf_digest_error 00:24:45.250 ************************************ 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.250 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.250 rmmod nvme_tcp 00:24:45.250 rmmod nvme_fabrics 00:24:45.250 rmmod nvme_keyring 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3482965 ']' 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3482965 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3482965 ']' 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3482965 00:24:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3482965) - No such process 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3482965 is not found' 00:24:45.511 Process with pid 3482965 is not found 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.511 04:25:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.412 04:25:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.412 00:24:47.412 real 0m40.572s 00:24:47.412 user 1m12.984s 00:24:47.412 sys 0m9.988s 00:24:47.412 04:25:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:47.412 04:25:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:47.412 ************************************ 00:24:47.412 END TEST nvmf_digest 00:24:47.412 ************************************ 00:24:47.412 04:25:35 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:24:47.412 04:25:35 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:24:47.412 04:25:35 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:24:47.412 04:25:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:47.412 04:25:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:47.412 04:25:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:47.412 04:25:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.412 ************************************ 00:24:47.412 START TEST nvmf_bdevperf 00:24:47.412 ************************************ 00:24:47.412 04:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:47.412 * Looking for test storage... 
00:24:47.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.672 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.673 04:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:50.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:50.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:50.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:50.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.204 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.205 04:25:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:24:50.205 00:24:50.205 --- 10.0.0.2 ping statistics --- 00:24:50.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.205 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:24:50.205 00:24:50.205 --- 10.0.0.1 ping statistics --- 00:24:50.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.205 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3487478 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3487478 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3487478 ']' 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:50.205 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.205 [2024-05-15 04:25:38.114331] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:50.205 [2024-05-15 04:25:38.114416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.205 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.205 [2024-05-15 04:25:38.189515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:50.472 [2024-05-15 04:25:38.299910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:50.472 [2024-05-15 04:25:38.299980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.472 [2024-05-15 04:25:38.299993] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.472 [2024-05-15 04:25:38.300020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.472 [2024-05-15 04:25:38.300030] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.472 [2024-05-15 04:25:38.300116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.472 [2024-05-15 04:25:38.300144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.472 [2024-05-15 04:25:38.300146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.472 [2024-05-15 04:25:38.448552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.472 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.731 Malloc0 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.731 [2024-05-15 04:25:38.510195] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:50.731 [2024-05-15 04:25:38.510484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:50.731 { 00:24:50.731 "params": { 00:24:50.731 "name": "Nvme$subsystem", 00:24:50.731 "trtype": "$TEST_TRANSPORT", 00:24:50.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:50.731 "adrfam": "ipv4", 00:24:50.731 "trsvcid": "$NVMF_PORT", 00:24:50.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:50.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:50.731 "hdgst": ${hdgst:-false}, 00:24:50.731 "ddgst": ${ddgst:-false} 00:24:50.731 }, 00:24:50.731 "method": "bdev_nvme_attach_controller" 00:24:50.731 } 00:24:50.731 EOF 00:24:50.731 )") 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:50.731 04:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:50.731 "params": { 00:24:50.731 "name": "Nvme1", 00:24:50.731 "trtype": "tcp", 00:24:50.731 "traddr": "10.0.0.2", 00:24:50.731 "adrfam": "ipv4", 00:24:50.731 "trsvcid": "4420", 00:24:50.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.731 "hdgst": false, 00:24:50.731 "ddgst": false 00:24:50.731 }, 00:24:50.731 "method": "bdev_nvme_attach_controller" 00:24:50.731 }' 00:24:50.731 [2024-05-15 04:25:38.561502] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:50.731 [2024-05-15 04:25:38.561588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487519 ] 00:24:50.731 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.731 [2024-05-15 04:25:38.636442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.990 [2024-05-15 04:25:38.752456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.249 Running I/O for 1 seconds... 
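The JSON passed to bdevperf over /dev/fd/62 above is the bdev_nvme_attach_controller block printed by gen_nvmf_target_json. A minimal sketch of rerunning the same verify job by hand, assuming the workspace path from this log and that a bare "subsystems"/"bdev" wrapper around that block is sufficient (the harness-generated config may carry additional bdev_nvme options):

# Sketch only: write the attach-controller params shown above into a standard
# SPDK JSON config file, then invoke bdevperf with the same flags as the run
# logged here (-q 128 -o 4096 -w verify -t 1). The file path is illustrative.
cat > /tmp/bdevperf_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 1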
00:24:52.182 00:24:52.182 Latency(us) 00:24:52.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.182 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:52.182 Verification LBA range: start 0x0 length 0x4000 00:24:52.182 Nvme1n1 : 1.01 8612.16 33.64 0.00 0.00 14767.13 1626.26 13786.83 00:24:52.182 =================================================================================================================== 00:24:52.182 Total : 8612.16 33.64 0.00 0.00 14767.13 1626.26 13786.83 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3487765 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.440 { 00:24:52.440 "params": { 00:24:52.440 "name": "Nvme$subsystem", 00:24:52.440 "trtype": "$TEST_TRANSPORT", 00:24:52.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.440 "adrfam": "ipv4", 00:24:52.440 "trsvcid": "$NVMF_PORT", 00:24:52.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.440 "hdgst": ${hdgst:-false}, 00:24:52.440 "ddgst": ${ddgst:-false} 00:24:52.440 }, 00:24:52.440 "method": "bdev_nvme_attach_controller" 00:24:52.440 } 00:24:52.440 EOF 00:24:52.440 )") 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:52.440 04:25:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.440 "params": { 00:24:52.440 "name": "Nvme1", 00:24:52.440 "trtype": "tcp", 00:24:52.440 "traddr": "10.0.0.2", 00:24:52.441 "adrfam": "ipv4", 00:24:52.441 "trsvcid": "4420", 00:24:52.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.441 "hdgst": false, 00:24:52.441 "ddgst": false 00:24:52.441 }, 00:24:52.441 "method": "bdev_nvme_attach_controller" 00:24:52.441 }' 00:24:52.441 [2024-05-15 04:25:40.398894] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:52.441 [2024-05-15 04:25:40.399002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487765 ] 00:24:52.441 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.698 [2024-05-15 04:25:40.471440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.698 [2024-05-15 04:25:40.581535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.955 Running I/O for 15 seconds... 
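In the run below, host/bdevperf.sh sends kill -9 to the nvmf_tgt pid (3487478) while the 15-second verify job is still in flight; the TCP connection to the target drops, and the I/O outstanding on qpair 1 completes with ABORTED - SQ DELETION status, which is the long run of nvme_qpair notices that follows. A minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock socket, of the same target-side RPCs issued earlier in this log, i.e. what it would take to recreate the subsystem once a fresh nvmf_tgt is listening:

# Sketch only: the sequence the harness issued via rpc_cmd earlier in this log,
# written as explicit rpc.py calls. The socket path is the rpc.py default and
# an assumption here; the flags mirror the logged commands.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420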
00:24:55.487 04:25:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3487478 00:24:55.487 04:25:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:55.487 [2024-05-15 04:25:43.373626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.373946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.373963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.374004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.374018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.374034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.374050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.374065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.374079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.374094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.487 [2024-05-15 04:25:43.374108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.487 [2024-05-15 04:25:43.374122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.374968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.374998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 
[2024-05-15 04:25:43.375042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.488 [2024-05-15 04:25:43.375346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.488 [2024-05-15 04:25:43.375363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.375955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.375972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46152 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 
04:25:43.376351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.489 [2024-05-15 04:25:43.376604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.489 [2024-05-15 04:25:43.376620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.489 [2024-05-15 04:25:43.376635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.376952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.376993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.490 [2024-05-15 04:25:43.377361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 
[2024-05-15 04:25:43.377697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.490 [2024-05-15 04:25:43.377868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.490 [2024-05-15 04:25:43.377884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e16770 is same with the state(5) to be set 00:24:55.490 [2024-05-15 04:25:43.377905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.490 [2024-05-15 04:25:43.377917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.490 [2024-05-15 04:25:43.377937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:24:55.490 [2024-05-15 04:25:43.377960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.491 [2024-05-15 04:25:43.378041] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e16770 was disconnected and freed. reset controller. 
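Editor's note: the dump above is SPDK's qpair teardown path printing every outstanding I/O it had to abort; each READ/WRITE command is paired with a completion carrying status (00/08), i.e. generic status code type with status code 0x08, Command Aborted due to SQ Deletion. A minimal sketch for tallying such a dump is below; the regexes, helper name, and sample string are illustrative assumptions based only on the log format shown here, not part of the SPDK test suite.

```python
# Minimal, illustrative parser for the NVMe qpair abort dump above.
# Assumption: the regexes and the sample line are mine and only need to
# match the log format shown in this build log, nothing more.
import re
from collections import Counter

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION "
    r"\((\w{2})/(\w{2})\)"
)

def summarize(log_text: str) -> Counter:
    """Count aborted READ/WRITE commands in a qpair abort dump."""
    counts = Counter()
    for m in CMD_RE.finditer(log_text):
        counts[m.group(1)] += 1
    # Every aborted command should be paired with a (00/08) completion,
    # i.e. generic status / Command Aborted due to SQ Deletion.
    counts["completions_00_08"] = sum(
        1 for m in CPL_RE.finditer(log_text) if m.groups() == ("00", "08")
    )
    return counts

if __name__ == "__main__":
    sample = (
        "[2024-05-15 04:25:43.375694] nvme_qpair.c: 243:"
        "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 "
        "lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 "
        "[2024-05-15 04:25:43.375711] nvme_qpair.c: 474:"
        "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) "
        "qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
    )
    print(summarize(sample))  # e.g. Counter({'WRITE': 1, 'completions_00_08': 1})
```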
00:24:55.491 [2024-05-15 04:25:43.381934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.382021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.382836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.383050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.383077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.383094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.383347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.383595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.383618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.383638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.387310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.491 [2024-05-15 04:25:43.396363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.396864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.397127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.397154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.397170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.397425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.397672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.397695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.397710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.401350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.491 [2024-05-15 04:25:43.410360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.410867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.411125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.411154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.411172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.411414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.411661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.411684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.411700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.415343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.491 [2024-05-15 04:25:43.424351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.424878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.425108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.425137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.425155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.425396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.425642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.425666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.425680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.429324] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.491 [2024-05-15 04:25:43.438401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.438864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.439083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.439112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.439130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.439372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.439617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.439640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.439655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.443304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.491 [2024-05-15 04:25:43.452340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.452882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.453132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.453164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.453181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.453423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.453669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.453692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.453707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.457352] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.491 [2024-05-15 04:25:43.466363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.466904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.467140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.467166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.467182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.467449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.467695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.467718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.467733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.471378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.491 [2024-05-15 04:25:43.480396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.480944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.481144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.481173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.481191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.481432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.481678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.481701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.481715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.485359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.491 [2024-05-15 04:25:43.494385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.491 [2024-05-15 04:25:43.494868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.495117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.491 [2024-05-15 04:25:43.495143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.491 [2024-05-15 04:25:43.495159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.491 [2024-05-15 04:25:43.495409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.491 [2024-05-15 04:25:43.495655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.491 [2024-05-15 04:25:43.495678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.491 [2024-05-15 04:25:43.495693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.491 [2024-05-15 04:25:43.499364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.750 [2024-05-15 04:25:43.508420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.750 [2024-05-15 04:25:43.508877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.509126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.509152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.750 [2024-05-15 04:25:43.509168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.750 [2024-05-15 04:25:43.509433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.750 [2024-05-15 04:25:43.509687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.750 [2024-05-15 04:25:43.509712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.750 [2024-05-15 04:25:43.509727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.750 [2024-05-15 04:25:43.513368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.750 [2024-05-15 04:25:43.522377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.750 [2024-05-15 04:25:43.522844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.523061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.523089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.750 [2024-05-15 04:25:43.523107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.750 [2024-05-15 04:25:43.523348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.750 [2024-05-15 04:25:43.523594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.750 [2024-05-15 04:25:43.523617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.750 [2024-05-15 04:25:43.523632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.750 [2024-05-15 04:25:43.527275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.750 [2024-05-15 04:25:43.536286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.750 [2024-05-15 04:25:43.536791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.537058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.537087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.750 [2024-05-15 04:25:43.537103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.750 [2024-05-15 04:25:43.537345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.750 [2024-05-15 04:25:43.537591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.750 [2024-05-15 04:25:43.537614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.750 [2024-05-15 04:25:43.537629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.750 [2024-05-15 04:25:43.541271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.750 [2024-05-15 04:25:43.550278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.750 [2024-05-15 04:25:43.550748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.550943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.750 [2024-05-15 04:25:43.550973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.550990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.551242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.551488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.551511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.551526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.555170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.564177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.564636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.564912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.564952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.564971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.565213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.565459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.565482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.565497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.569137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.751 [2024-05-15 04:25:43.578154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.578656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.578870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.578898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.578916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.579164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.579411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.579435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.579450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.583099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.592187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.592715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.592895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.592919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.592948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.593169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.593426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.593449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.593464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.597152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.751 [2024-05-15 04:25:43.606203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.606734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.607027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.607053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.607068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.607311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.607557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.607580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.607594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.611235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.620228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.620742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.621006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.621033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.621048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.621302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.621548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.621571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.621586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.625224] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
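Editor's note: each reconnect cycle in the log repeats the same sequence (resetting controller, two refused connect() calls, failed flush, failed reinitialization) roughly every 14 ms, judging by the nvme_ctrlr_disconnect timestamps. A rough sketch for extracting that cadence from the text is below; the regex, function name, and sample lines are assumptions of mine that only need to match the format shown above.

```python
# Rough sketch: measure how often the reconnect cycle above repeats by
# pulling the timestamps of the "resetting controller" notices and printing
# the interval between consecutive attempts. Illustrative only.
import re
from datetime import datetime

RESET_RE = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\] "
    r"nvme_ctrlr\.c:\s*\d+:nvme_ctrlr_disconnect: \*NOTICE\*: .*? resetting controller"
)

def reset_intervals(log_text: str) -> list[float]:
    """Return the gaps, in milliseconds, between consecutive reset attempts."""
    stamps = [
        datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        for m in RESET_RE.finditer(log_text)
    ]
    return [(b - a).total_seconds() * 1000.0 for a, b in zip(stamps, stamps[1:])]

if __name__ == "__main__":
    sample = (
        "[2024-05-15 04:25:43.381934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: "
        "*NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller "
        "[2024-05-15 04:25:43.396363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: "
        "*NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller"
    )
    print(reset_intervals(sample))  # roughly [14.4]
```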
00:24:55.751 [2024-05-15 04:25:43.634378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.634883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.635134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.635162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.635180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.635427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.635673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.635696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.635711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.639409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.648131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.648555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.648757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.648783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.648798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.649056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.649279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.649300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.649314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.652582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.751 [2024-05-15 04:25:43.661588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.662058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.662260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.662285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.662300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.662541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.662747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.662767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.662780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.665962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.674907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.675347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.675587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.675627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.675642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.675877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.676132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.676154] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.676167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.679201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.751 [2024-05-15 04:25:43.688286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.688810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.689011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.689037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.689052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.689296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.689513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.689532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.689544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.692575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.751 [2024-05-15 04:25:43.701608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.702079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.702336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.702361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.751 [2024-05-15 04:25:43.702376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.751 [2024-05-15 04:25:43.702611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.751 [2024-05-15 04:25:43.702813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.751 [2024-05-15 04:25:43.702832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.751 [2024-05-15 04:25:43.702843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.751 [2024-05-15 04:25:43.705909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.751 [2024-05-15 04:25:43.714996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.751 [2024-05-15 04:25:43.715455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.751 [2024-05-15 04:25:43.715723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.715748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.752 [2024-05-15 04:25:43.715763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.752 [2024-05-15 04:25:43.716028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.752 [2024-05-15 04:25:43.716258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.752 [2024-05-15 04:25:43.716283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.752 [2024-05-15 04:25:43.716311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.752 [2024-05-15 04:25:43.719339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.752 [2024-05-15 04:25:43.728297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.752 [2024-05-15 04:25:43.728774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.728983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.729009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.752 [2024-05-15 04:25:43.729025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.752 [2024-05-15 04:25:43.729259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.752 [2024-05-15 04:25:43.729477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.752 [2024-05-15 04:25:43.729495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.752 [2024-05-15 04:25:43.729507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.752 [2024-05-15 04:25:43.732532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.752 [2024-05-15 04:25:43.741685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.752 [2024-05-15 04:25:43.742113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.742346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.742371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.752 [2024-05-15 04:25:43.742386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.752 [2024-05-15 04:25:43.742643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.752 [2024-05-15 04:25:43.742844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.752 [2024-05-15 04:25:43.742863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.752 [2024-05-15 04:25:43.742875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.752 [2024-05-15 04:25:43.745987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.752 [2024-05-15 04:25:43.755106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.752 [2024-05-15 04:25:43.755630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.755847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.752 [2024-05-15 04:25:43.755872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:55.752 [2024-05-15 04:25:43.755888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:55.752 [2024-05-15 04:25:43.756158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:55.752 [2024-05-15 04:25:43.756395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.752 [2024-05-15 04:25:43.756415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.752 [2024-05-15 04:25:43.756431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.752 [2024-05-15 04:25:43.759459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.011 [2024-05-15 04:25:43.768414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.011 [2024-05-15 04:25:43.768839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.011 [2024-05-15 04:25:43.769030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.011 [2024-05-15 04:25:43.769056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.011 [2024-05-15 04:25:43.769072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.011 [2024-05-15 04:25:43.769307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.011 [2024-05-15 04:25:43.769540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.011 [2024-05-15 04:25:43.769561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.011 [2024-05-15 04:25:43.769574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.011 [2024-05-15 04:25:43.772713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.011 [2024-05-15 04:25:43.781828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.011 [2024-05-15 04:25:43.782291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.011 [2024-05-15 04:25:43.782472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.011 [2024-05-15 04:25:43.782497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.011 [2024-05-15 04:25:43.782512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.011 [2024-05-15 04:25:43.782769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.011 [2024-05-15 04:25:43.782996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.783031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.783044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.786202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.012 [2024-05-15 04:25:43.795098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.795631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.795854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.795879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.795894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.796122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.796366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.796386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.796398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.799474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.012 [2024-05-15 04:25:43.808503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.808974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.809195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.809220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.809234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.809456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.809674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.809693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.809705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.812786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.012 [2024-05-15 04:25:43.821788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.822293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.822485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.822510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.822526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.822765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.823012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.823033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.823046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.826100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.012 [2024-05-15 04:25:43.835335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.835809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.835997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.836025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.836041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.836283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.836505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.836524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.836536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.839576] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.012 [2024-05-15 04:25:43.848718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.849248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.849458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.849482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.849497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.849746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.849973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.849994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.850006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.853060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.012 [2024-05-15 04:25:43.862027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.862457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.862655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.862680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.862696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.862962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.863177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.863198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.863210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.866280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.012 [2024-05-15 04:25:43.875485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.876003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.876180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.876205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.876220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.876479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.876680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.876699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.876711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.879951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.012 [2024-05-15 04:25:43.888981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.889558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.889810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.889835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.889850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.890125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.890370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.890390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.890402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.893433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.012 [2024-05-15 04:25:43.902312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.902843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.903076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.903102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.012 [2024-05-15 04:25:43.903117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.012 [2024-05-15 04:25:43.903376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.012 [2024-05-15 04:25:43.903577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.012 [2024-05-15 04:25:43.903596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.012 [2024-05-15 04:25:43.903608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.012 [2024-05-15 04:25:43.906633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.012 [2024-05-15 04:25:43.915608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.012 [2024-05-15 04:25:43.916039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.916245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.012 [2024-05-15 04:25:43.916270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.916285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.916540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.916741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.916760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.916772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.919862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.013 [2024-05-15 04:25:43.928983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.929693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.929948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.929981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.929998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.930240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.930460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.930480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.930492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.933521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.013 [2024-05-15 04:25:43.942281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.942716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.942915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.942948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.942965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.943197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.943417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.943436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.943448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.946514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.013 [2024-05-15 04:25:43.955550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.956014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.956252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.956277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.956292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.956544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.956745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.956764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.956776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.959839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.013 [2024-05-15 04:25:43.968945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.969359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.969528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.969553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.969573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.969810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.970059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.970081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.970093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.973140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.013 [2024-05-15 04:25:43.982216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.982698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.982945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.982971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.982987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.983218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.983456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.983475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.983487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.986512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.013 [2024-05-15 04:25:43.995496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:43.996019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.996216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:43.996241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:43.996257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:43.996507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:43.996708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:43.996727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:43.996739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:43.999870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.013 [2024-05-15 04:25:44.008819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:44.009341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:44.009558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:44.009583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:44.009598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:44.009856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:44.010106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:44.010127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:44.010140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.013 [2024-05-15 04:25:44.013189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.013 [2024-05-15 04:25:44.022321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.013 [2024-05-15 04:25:44.022811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:44.023036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.013 [2024-05-15 04:25:44.023064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.013 [2024-05-15 04:25:44.023080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.013 [2024-05-15 04:25:44.023317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.013 [2024-05-15 04:25:44.023550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.013 [2024-05-15 04:25:44.023571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.013 [2024-05-15 04:25:44.023585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.273 [2024-05-15 04:25:44.026957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.273 [2024-05-15 04:25:44.035709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.273 [2024-05-15 04:25:44.036194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.273 [2024-05-15 04:25:44.036400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.273 [2024-05-15 04:25:44.036427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.273 [2024-05-15 04:25:44.036443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.273 [2024-05-15 04:25:44.036697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.273 [2024-05-15 04:25:44.036906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.273 [2024-05-15 04:25:44.036926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.273 [2024-05-15 04:25:44.036964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.273 [2024-05-15 04:25:44.040020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.273 [2024-05-15 04:25:44.048989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.273 [2024-05-15 04:25:44.049451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.273 [2024-05-15 04:25:44.049658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.049684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.049700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.049963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.050199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.050220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.050233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.053294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.062383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.062907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.063107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.063133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.063148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.063403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.063605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.063624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.063636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.066702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.274 [2024-05-15 04:25:44.075790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.076210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.076428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.076454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.076468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.076703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.076905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.076947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.076961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.080035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.089137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.089780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.090072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.090101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.090117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.090367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.090584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.090612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.090625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.093654] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.274 [2024-05-15 04:25:44.102523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.103024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.103252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.103277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.103292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.103550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.103752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.103770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.103782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.106859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.115774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.116282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.116474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.116499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.116514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.116756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.116984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.117019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.117032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.120107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.274 [2024-05-15 04:25:44.129171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.129657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.129899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.129923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.129948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.130181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.130402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.130421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.130438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.133571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.142737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.143176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.143376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.143402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.143418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.143663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.143881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.143901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.143913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.147144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.274 [2024-05-15 04:25:44.156081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.156571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.156787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.156812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.156827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.157072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.157294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.157314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.157326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.160417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.169362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.169832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.170083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.170109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.170125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.170364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.170566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.170585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.170597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.173598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.274 [2024-05-15 04:25:44.182725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.183206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.183405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.183430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.183446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.274 [2024-05-15 04:25:44.183706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.274 [2024-05-15 04:25:44.183907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.274 [2024-05-15 04:25:44.183926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.274 [2024-05-15 04:25:44.183963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.274 [2024-05-15 04:25:44.187048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.274 [2024-05-15 04:25:44.196077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.274 [2024-05-15 04:25:44.196508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.196708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.274 [2024-05-15 04:25:44.196734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.274 [2024-05-15 04:25:44.196750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.196998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.197222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.197257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.197270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.200413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.275 [2024-05-15 04:25:44.209401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.209921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.210118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.210143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.210159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.210402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.210619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.210639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.210651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.213738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.275 [2024-05-15 04:25:44.222826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.223281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.223512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.223537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.223552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.223806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.224054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.224076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.224089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.227144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.275 [2024-05-15 04:25:44.236091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.236650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.236835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.236859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.236876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.237152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.237371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.237391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.237403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.240469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.275 [2024-05-15 04:25:44.249423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.249831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.250012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.250038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.250053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.250311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.250513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.250533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.250545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.253613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.275 [2024-05-15 04:25:44.262764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.263241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.263510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.263535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.263551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.263808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.264060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.264082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.264096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.267176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.275 [2024-05-15 04:25:44.276261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.275 [2024-05-15 04:25:44.276730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.276935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.275 [2024-05-15 04:25:44.276961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.275 [2024-05-15 04:25:44.276976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.275 [2024-05-15 04:25:44.277220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.275 [2024-05-15 04:25:44.277438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.275 [2024-05-15 04:25:44.277458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.275 [2024-05-15 04:25:44.277470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.275 [2024-05-15 04:25:44.280599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.535 [2024-05-15 04:25:44.289726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.535 [2024-05-15 04:25:44.290169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.535 [2024-05-15 04:25:44.290389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.535 [2024-05-15 04:25:44.290415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.535 [2024-05-15 04:25:44.290430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.535 [2024-05-15 04:25:44.290685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.535 [2024-05-15 04:25:44.290886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.535 [2024-05-15 04:25:44.290905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.535 [2024-05-15 04:25:44.290941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.535 [2024-05-15 04:25:44.294313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.535 [2024-05-15 04:25:44.303172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.535 [2024-05-15 04:25:44.303664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.535 [2024-05-15 04:25:44.303877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.535 [2024-05-15 04:25:44.303907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.535 [2024-05-15 04:25:44.303924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.535 [2024-05-15 04:25:44.304176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.535 [2024-05-15 04:25:44.304394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.535 [2024-05-15 04:25:44.304414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.535 [2024-05-15 04:25:44.304426] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.307465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.536 [2024-05-15 04:25:44.316565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.317065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.317271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.317297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.317312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.317568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.317769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.317789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.317801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.320871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.536 [2024-05-15 04:25:44.329840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.330310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.330504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.330529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.330544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.330786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.331039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.331062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.331076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.334159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.536 [2024-05-15 04:25:44.343119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.343636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.343822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.343847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.343868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.344123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.344343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.344363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.344375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.347473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.536 [2024-05-15 04:25:44.356423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.356879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.357066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.357091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.357107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.357367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.357569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.357588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.357599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.360624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.536 [2024-05-15 04:25:44.369746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.370164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.370394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.370419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.370434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.370687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.370888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.370922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.370943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.374012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.536 [2024-05-15 04:25:44.383198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.383625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.383828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.383853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.383868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.384113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.384356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.384376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.384388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.387529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.536 [2024-05-15 04:25:44.396729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.397185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.397389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.397415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.397431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.397676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.397898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.397917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.397937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.401214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.536 [2024-05-15 04:25:44.410100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.410640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.410829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.410855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.410870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.411106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.411346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.411365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.411377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.414435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.536 [2024-05-15 04:25:44.423411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.423836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.424140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.424166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.424182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.536 [2024-05-15 04:25:44.424424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.536 [2024-05-15 04:25:44.424631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.536 [2024-05-15 04:25:44.424650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.536 [2024-05-15 04:25:44.424663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.536 [2024-05-15 04:25:44.427727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.536 [2024-05-15 04:25:44.436689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.536 [2024-05-15 04:25:44.437184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.437386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.536 [2024-05-15 04:25:44.437411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.536 [2024-05-15 04:25:44.437426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.437676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.437878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.437897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.437924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.441044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.537 [2024-05-15 04:25:44.450034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.450479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.450705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.450730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.450745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.451013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.451236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.451274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.451287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.454331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.537 [2024-05-15 04:25:44.463419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.463873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.464073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.464100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.464116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.464355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.464556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.464580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.464593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.467631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.537 [2024-05-15 04:25:44.476815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.477313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.477500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.477525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.477540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.477794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.478021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.478042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.478055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.481121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.537 [2024-05-15 04:25:44.490273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.490741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.490953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.490978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.490994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.491223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.491458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.491478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.491490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.494482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.537 [2024-05-15 04:25:44.503704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.504140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.504319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.504346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.504361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.504606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.504808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.504827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.504845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.507833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.537 [2024-05-15 04:25:44.517029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.517489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.517715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.517739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.517753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.518001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.518225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.518259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.518272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.521339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.537 [2024-05-15 04:25:44.530446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.530906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.531150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.531175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.531191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.531448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.531649] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.531668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.531680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.534755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.537 [2024-05-15 04:25:44.543728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.537 [2024-05-15 04:25:44.544169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.544369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.537 [2024-05-15 04:25:44.544394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.537 [2024-05-15 04:25:44.544409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.537 [2024-05-15 04:25:44.544649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.537 [2024-05-15 04:25:44.544887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.537 [2024-05-15 04:25:44.544926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.537 [2024-05-15 04:25:44.544952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.537 [2024-05-15 04:25:44.548388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.797 [2024-05-15 04:25:44.557817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.797 [2024-05-15 04:25:44.558259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.558481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.558508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.797 [2024-05-15 04:25:44.558525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.797 [2024-05-15 04:25:44.558766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.797 [2024-05-15 04:25:44.559022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.797 [2024-05-15 04:25:44.559046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.797 [2024-05-15 04:25:44.559061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.797 [2024-05-15 04:25:44.562690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.797 [2024-05-15 04:25:44.571905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.797 [2024-05-15 04:25:44.572408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.572667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.572695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.797 [2024-05-15 04:25:44.572712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.797 [2024-05-15 04:25:44.572962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.797 [2024-05-15 04:25:44.573209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.797 [2024-05-15 04:25:44.573233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.797 [2024-05-15 04:25:44.573247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.797 [2024-05-15 04:25:44.576882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.797 [2024-05-15 04:25:44.585881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.797 [2024-05-15 04:25:44.586383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.586584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.586607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.797 [2024-05-15 04:25:44.586622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.797 [2024-05-15 04:25:44.586870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.797 [2024-05-15 04:25:44.587125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.797 [2024-05-15 04:25:44.587149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.797 [2024-05-15 04:25:44.587164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.797 [2024-05-15 04:25:44.590795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.797 [2024-05-15 04:25:44.599809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.797 [2024-05-15 04:25:44.600293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.600557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.797 [2024-05-15 04:25:44.600582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.797 [2024-05-15 04:25:44.600597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.600866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.601124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.601148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.601163] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.604799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.798 [2024-05-15 04:25:44.613837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.614345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.614532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.614558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.614573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.614840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.615098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.615122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.615137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.618773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.798 [2024-05-15 04:25:44.627784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.628424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.628681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.628709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.628726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.628976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.629222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.629245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.629260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.632889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.798 [2024-05-15 04:25:44.641698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.642169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.642417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.642441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.642456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.642708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.642968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.642992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.643007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.646688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.798 [2024-05-15 04:25:44.655810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.656286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.656568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.656593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.656609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.656869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.657135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.657159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.657177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.660813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.798 [2024-05-15 04:25:44.669831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.670315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.670505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.670530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.670545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.670809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.671066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.671090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.671104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.674736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.798 [2024-05-15 04:25:44.683756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.684252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.684491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.684520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.684535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.684782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.685041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.685065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.685080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.688714] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.798 [2024-05-15 04:25:44.697716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.698307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.698566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.698605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.698620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.698875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.699131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.699156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.699171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.702799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.798 [2024-05-15 04:25:44.711809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.712270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.712488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.712515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.798 [2024-05-15 04:25:44.712532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.798 [2024-05-15 04:25:44.712773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.798 [2024-05-15 04:25:44.713030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.798 [2024-05-15 04:25:44.713054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.798 [2024-05-15 04:25:44.713069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.798 [2024-05-15 04:25:44.716703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.798 [2024-05-15 04:25:44.725715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.798 [2024-05-15 04:25:44.726201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.726420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.798 [2024-05-15 04:25:44.726443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.726465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.726720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.726978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.727002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.727017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.730650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.799 [2024-05-15 04:25:44.739675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.740307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.740588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.740616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.740632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.740874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.741130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.741154] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.741169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.744806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.799 [2024-05-15 04:25:44.753604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.754058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.754279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.754307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.754323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.754565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.754810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.754834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.754848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.758489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.799 [2024-05-15 04:25:44.767708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.768171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.768392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.768420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.768437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.768684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.768940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.768964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.768979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.772610] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.799 [2024-05-15 04:25:44.781613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.782076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.782276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.782316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.782331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.782580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.782835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.782859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.782874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.786517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.799 [2024-05-15 04:25:44.795519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.796005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.796188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.796216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.796233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.796474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.796719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.796743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.796757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.799 [2024-05-15 04:25:44.800400] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.799 [2024-05-15 04:25:44.809647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.799 [2024-05-15 04:25:44.810120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.810343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.799 [2024-05-15 04:25:44.810371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:56.799 [2024-05-15 04:25:44.810387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:56.799 [2024-05-15 04:25:44.810629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:56.799 [2024-05-15 04:25:44.810880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.799 [2024-05-15 04:25:44.810905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.799 [2024-05-15 04:25:44.810919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.059 [2024-05-15 04:25:44.814590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.059 [2024-05-15 04:25:44.823618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.059 [2024-05-15 04:25:44.824078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.059 [2024-05-15 04:25:44.824278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.059 [2024-05-15 04:25:44.824308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.059 [2024-05-15 04:25:44.824325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.059 [2024-05-15 04:25:44.824566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.059 [2024-05-15 04:25:44.824812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.059 [2024-05-15 04:25:44.824836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.059 [2024-05-15 04:25:44.824850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.059 [2024-05-15 04:25:44.828493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.059 [2024-05-15 04:25:44.837740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.059 [2024-05-15 04:25:44.838245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.059 [2024-05-15 04:25:44.838470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.059 [2024-05-15 04:25:44.838498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.838515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.838756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.839014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.839038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.839053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.842683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.851686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.852173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.852397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.852425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.852441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.852682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.852928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.852968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.852984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.856614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.865627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.866122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.866382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.866408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.866423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.866685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.866941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.866965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.866980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.870614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.879640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.880129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.880327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.880356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.880374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.880617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.880863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.880887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.880902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.884546] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.893560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.894035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.894229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.894257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.894274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.894516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.894769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.894794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.894815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.898518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.907677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.908220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.908510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.908537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.908554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.908795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.909051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.909075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.909089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.912725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.921732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.922212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.922403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.922432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.922449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.922691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.922946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.922970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.922985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.926616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.935629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.936094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.936317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.936345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.936362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.936603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.936849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.936872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.936887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.940554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.949572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.950041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.950222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.950250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.950267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.950508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.950754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.950778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.950792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.954440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.060 [2024-05-15 04:25:44.963664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.060 [2024-05-15 04:25:44.964168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.964413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.060 [2024-05-15 04:25:44.964441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.060 [2024-05-15 04:25:44.964458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.060 [2024-05-15 04:25:44.964699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.060 [2024-05-15 04:25:44.964956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.060 [2024-05-15 04:25:44.964981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.060 [2024-05-15 04:25:44.964995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.060 [2024-05-15 04:25:44.968623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:44.977660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:44.978127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:44.978342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:44.978370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:44.978387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:44.978629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:44.978874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:44.978898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:44.978913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:44.982552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:44.991562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:44.992018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:44.992248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:44.992275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:44.992292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:44.992533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:44.992779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:44.992802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:44.992817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:44.996460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:45.005466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:45.005926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.006154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.006182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:45.006199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:45.006440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:45.006686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:45.006710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:45.006725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:45.010372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:45.019376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:45.019867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.020092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.020121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:45.020138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:45.020379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:45.020625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:45.020648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:45.020663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:45.024305] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:45.033314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:45.033812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.034028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.034057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:45.034074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:45.034315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:45.034562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:45.034585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:45.034600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:45.038257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:45.047264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:45.047756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.047983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.048012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:45.048029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:45.048271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:45.048517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:45.048540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:45.048555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:45.052198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.061 [2024-05-15 04:25:45.061215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.061 [2024-05-15 04:25:45.061699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.061888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.061 [2024-05-15 04:25:45.061916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.061 [2024-05-15 04:25:45.061942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.061 [2024-05-15 04:25:45.062186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.061 [2024-05-15 04:25:45.062433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.061 [2024-05-15 04:25:45.062456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.061 [2024-05-15 04:25:45.062470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.061 [2024-05-15 04:25:45.066111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.321 [2024-05-15 04:25:45.075173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.321 [2024-05-15 04:25:45.075651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.075900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.075943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.321 [2024-05-15 04:25:45.075963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.321 [2024-05-15 04:25:45.076205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.321 [2024-05-15 04:25:45.076451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.321 [2024-05-15 04:25:45.076474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.321 [2024-05-15 04:25:45.076489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.321 [2024-05-15 04:25:45.080157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.321 [2024-05-15 04:25:45.089165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.321 [2024-05-15 04:25:45.089596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.089815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.089843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.321 [2024-05-15 04:25:45.089860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.321 [2024-05-15 04:25:45.090111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.321 [2024-05-15 04:25:45.090358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.321 [2024-05-15 04:25:45.090382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.321 [2024-05-15 04:25:45.090396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.321 [2024-05-15 04:25:45.094037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.321 [2024-05-15 04:25:45.103263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.321 [2024-05-15 04:25:45.103760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.103983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.104012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.321 [2024-05-15 04:25:45.104029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.321 [2024-05-15 04:25:45.104270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.321 [2024-05-15 04:25:45.104516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.321 [2024-05-15 04:25:45.104539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.321 [2024-05-15 04:25:45.104554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.321 [2024-05-15 04:25:45.108201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.321 [2024-05-15 04:25:45.117214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.321 [2024-05-15 04:25:45.117710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.117909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.321 [2024-05-15 04:25:45.117948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.321 [2024-05-15 04:25:45.117974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.321 [2024-05-15 04:25:45.118217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.321 [2024-05-15 04:25:45.118463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.321 [2024-05-15 04:25:45.118486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.321 [2024-05-15 04:25:45.118501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.321 [2024-05-15 04:25:45.122142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.131152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.131632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.131820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.131848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.131865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.132117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.132364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.132387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.132402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.136041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.145055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.145533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.145723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.145751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.145768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.146022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.146269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.146293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.146307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.149989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.159098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.159604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.159846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.159874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.159891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.160149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.160395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.160419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.160439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.164109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.173124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.173603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.173847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.173874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.173891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.174141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.174387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.174410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.174425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.178065] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.187074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.187609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.187831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.187858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.187875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.188129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.188375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.188398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.188413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.192050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.201062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.201542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.201798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.201825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.201842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.202094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.202347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.202370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.202385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.206025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.215036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.215485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.215697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.215724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.215741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.215994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.216241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.216264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.216279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.219908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.229139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.229596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.229842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.229870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.229886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.230139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.230386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.230409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.230424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.234067] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.243084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.243554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.243741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.243771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.322 [2024-05-15 04:25:45.243789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.322 [2024-05-15 04:25:45.244044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.322 [2024-05-15 04:25:45.244291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.322 [2024-05-15 04:25:45.244320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.322 [2024-05-15 04:25:45.244336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.322 [2024-05-15 04:25:45.247973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.322 [2024-05-15 04:25:45.257191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.322 [2024-05-15 04:25:45.257691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.322 [2024-05-15 04:25:45.257879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.257907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.257924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.258177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.258423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.258446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.258461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.262101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.323 [2024-05-15 04:25:45.271107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.323 [2024-05-15 04:25:45.271564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.271806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.271834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.271851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.272103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.272349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.272372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.272387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.276022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.323 [2024-05-15 04:25:45.285024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.323 [2024-05-15 04:25:45.285498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.285717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.285745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.285761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.286015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.286261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.286285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.286305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.289944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.323 [2024-05-15 04:25:45.298947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.323 [2024-05-15 04:25:45.299403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.299654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.299682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.299699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.299953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.300199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.300223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.300237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.303871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.323 [2024-05-15 04:25:45.312882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.323 [2024-05-15 04:25:45.313328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.313545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.313573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.313589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.313831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.314090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.314114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.314128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.317764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.323 [2024-05-15 04:25:45.326791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.323 [2024-05-15 04:25:45.327270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.327469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.323 [2024-05-15 04:25:45.327498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.323 [2024-05-15 04:25:45.327515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.323 [2024-05-15 04:25:45.327756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.323 [2024-05-15 04:25:45.328013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.323 [2024-05-15 04:25:45.328037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.323 [2024-05-15 04:25:45.328052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.323 [2024-05-15 04:25:45.331705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-05-15 04:25:45.340795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-05-15 04:25:45.341303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.341516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.341544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.584 [2024-05-15 04:25:45.341562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.584 [2024-05-15 04:25:45.341803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.584 [2024-05-15 04:25:45.342060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-05-15 04:25:45.342084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-05-15 04:25:45.342099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-05-15 04:25:45.345739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-05-15 04:25:45.354786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-05-15 04:25:45.355294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.355480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.355511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.584 [2024-05-15 04:25:45.355528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.584 [2024-05-15 04:25:45.355772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.584 [2024-05-15 04:25:45.356032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-05-15 04:25:45.356056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-05-15 04:25:45.356071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-05-15 04:25:45.359710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-05-15 04:25:45.368755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-05-15 04:25:45.369255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.369503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-05-15 04:25:45.369531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420
00:24:57.584 [2024-05-15 04:25:45.369548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set
00:24:57.584 [2024-05-15 04:25:45.369789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor
00:24:57.584 [2024-05-15 04:25:45.370047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-05-15 04:25:45.370071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-05-15 04:25:45.370086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-05-15 04:25:45.373724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-05-15 04:25:45.382761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.584 [2024-05-15 04:25:45.383226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.383467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.383494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.584 [2024-05-15 04:25:45.383511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.584 [2024-05-15 04:25:45.383752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.584 [2024-05-15 04:25:45.384010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.584 [2024-05-15 04:25:45.384034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.584 [2024-05-15 04:25:45.384049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.584 [2024-05-15 04:25:45.387687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.584 [2024-05-15 04:25:45.396713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.584 [2024-05-15 04:25:45.397185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.397403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.397431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.584 [2024-05-15 04:25:45.397448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.584 [2024-05-15 04:25:45.397689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.584 [2024-05-15 04:25:45.397958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.584 [2024-05-15 04:25:45.397984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.584 [2024-05-15 04:25:45.397998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.584 [2024-05-15 04:25:45.401692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.584 [2024-05-15 04:25:45.410778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.584 [2024-05-15 04:25:45.411269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.411488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.411515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.584 [2024-05-15 04:25:45.411532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.584 [2024-05-15 04:25:45.411773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.584 [2024-05-15 04:25:45.412030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.584 [2024-05-15 04:25:45.412054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.584 [2024-05-15 04:25:45.412068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.584 [2024-05-15 04:25:45.415702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.584 [2024-05-15 04:25:45.424744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.584 [2024-05-15 04:25:45.425219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.425434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.584 [2024-05-15 04:25:45.425463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.584 [2024-05-15 04:25:45.425480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.584 [2024-05-15 04:25:45.425722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.584 [2024-05-15 04:25:45.425978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.584 [2024-05-15 04:25:45.426002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.584 [2024-05-15 04:25:45.426017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.584 [2024-05-15 04:25:45.429647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.584 [2024-05-15 04:25:45.438678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.439175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.439425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.439452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.439469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.439710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.439965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.439989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.440004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.443635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.585 [2024-05-15 04:25:45.452809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.453285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.453534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.453561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.453577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.453818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.454074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.454098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.454113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.457745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.585 [2024-05-15 04:25:45.466767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.467249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.467496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.467529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.467547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.467788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.468044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.468068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.468083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.471712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.585 [2024-05-15 04:25:45.480728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.481216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.481431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.481459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.481476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.481717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.481973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.481998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.482013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.485650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.585 [2024-05-15 04:25:45.494661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.495208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.495423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.495450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.495467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.495708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.495963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.495987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.496002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.499637] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.585 [2024-05-15 04:25:45.508658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.509145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.509337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.509365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.509388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.509630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.509876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.509898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.509913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.513556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.585 [2024-05-15 04:25:45.522564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.523052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.523253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.523280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.523297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.523538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.523784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.523807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.523821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.527468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.585 [2024-05-15 04:25:45.536477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.537047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.537301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.537328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.537345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.537586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.537832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.537863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.537881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.541526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.585 [2024-05-15 04:25:45.550545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.550995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.551214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.551241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.551259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.551505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.551751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.585 [2024-05-15 04:25:45.551774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.585 [2024-05-15 04:25:45.551789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.585 [2024-05-15 04:25:45.555432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.585 [2024-05-15 04:25:45.564453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.585 [2024-05-15 04:25:45.564943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.565174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.585 [2024-05-15 04:25:45.565202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.585 [2024-05-15 04:25:45.565219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.585 [2024-05-15 04:25:45.565463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.585 [2024-05-15 04:25:45.565709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-05-15 04:25:45.565732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-05-15 04:25:45.565748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-05-15 04:25:45.569388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.586 [2024-05-15 04:25:45.578400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-05-15 04:25:45.578893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-05-15 04:25:45.579131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-05-15 04:25:45.579159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.586 [2024-05-15 04:25:45.579176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.586 [2024-05-15 04:25:45.579418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.586 [2024-05-15 04:25:45.579664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-05-15 04:25:45.579687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-05-15 04:25:45.579702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-05-15 04:25:45.583345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-05-15 04:25:45.592362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-05-15 04:25:45.592816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-05-15 04:25:45.593052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-05-15 04:25:45.593081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.586 [2024-05-15 04:25:45.593099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.586 [2024-05-15 04:25:45.593346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.586 [2024-05-15 04:25:45.593606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-05-15 04:25:45.593631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-05-15 04:25:45.593645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-05-15 04:25:45.597332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-05-15 04:25:45.606497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.606973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.607222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.607252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.607270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.607520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.607781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.607808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.607823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.611593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-05-15 04:25:45.620600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.621058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.621278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.621306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.621323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.621565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.621810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.621833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.621848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.625491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-05-15 04:25:45.634710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.635200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.635388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.635416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.635433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.635674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.635920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.635958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.635974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.639624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-05-15 04:25:45.648632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.649114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.649314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.649340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.649357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.649599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.649853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.649878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.649893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.653621] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-05-15 04:25:45.662678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.663155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.663406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.663433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.663450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.663691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.663949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.663973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.663987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.667620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-05-15 04:25:45.676636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.677077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.677266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.677294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.677311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.677552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.677798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.677821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.677841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.681486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-05-15 04:25:45.690703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.691141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.691363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.691390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.865 [2024-05-15 04:25:45.691407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.865 [2024-05-15 04:25:45.691648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.865 [2024-05-15 04:25:45.691895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-05-15 04:25:45.691918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-05-15 04:25:45.691943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-05-15 04:25:45.695594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-05-15 04:25:45.704599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-05-15 04:25:45.705093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.705310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-05-15 04:25:45.705337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.705354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.705595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.705841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.705864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.705879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.709539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-05-15 04:25:45.718541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.719000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.719183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.719211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.719234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.719476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.719722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.719745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.719760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.723409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-05-15 04:25:45.732632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.733137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.733370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.733399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.733416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.733658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.733904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.733927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.733950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.737584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-05-15 04:25:45.746632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.747126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.747382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.747409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.747426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.747667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.747913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.747945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.747961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.751589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-05-15 04:25:45.760587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.761038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.761252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.761279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.761296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.761537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.761783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.761806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.761820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.765479] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-05-15 04:25:45.774494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.774982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.775197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.775224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.775241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.775482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.775728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.775752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.775766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.779409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-05-15 04:25:45.788428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.788884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.789112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.789142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.789159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.789401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.789646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.789669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.789684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.793326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-05-15 04:25:45.802329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.802830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.803055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-05-15 04:25:45.803084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.866 [2024-05-15 04:25:45.803101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.866 [2024-05-15 04:25:45.803343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.866 [2024-05-15 04:25:45.803588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-05-15 04:25:45.803612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-05-15 04:25:45.803626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-05-15 04:25:45.807265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-05-15 04:25:45.816471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-05-15 04:25:45.817040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.817228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.817256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.867 [2024-05-15 04:25:45.817273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.867 [2024-05-15 04:25:45.817514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.867 [2024-05-15 04:25:45.817760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-05-15 04:25:45.817784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-05-15 04:25:45.817799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-05-15 04:25:45.821441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.867 [2024-05-15 04:25:45.830440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-05-15 04:25:45.830944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.831137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.831164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.867 [2024-05-15 04:25:45.831180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.867 [2024-05-15 04:25:45.831421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.867 [2024-05-15 04:25:45.831667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-05-15 04:25:45.831691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-05-15 04:25:45.831706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-05-15 04:25:45.835349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-05-15 04:25:45.844377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-05-15 04:25:45.844872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.845142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.845171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.867 [2024-05-15 04:25:45.845188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.867 [2024-05-15 04:25:45.845430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.867 [2024-05-15 04:25:45.845676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-05-15 04:25:45.845700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-05-15 04:25:45.845715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-05-15 04:25:45.849350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.867 [2024-05-15 04:25:45.858369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-05-15 04:25:45.858821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.859057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.859092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.867 [2024-05-15 04:25:45.859110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.867 [2024-05-15 04:25:45.859352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.867 [2024-05-15 04:25:45.859598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-05-15 04:25:45.859621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-05-15 04:25:45.859636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-05-15 04:25:45.863273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-05-15 04:25:45.872285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-05-15 04:25:45.872743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.872925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-05-15 04:25:45.872960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:57.867 [2024-05-15 04:25:45.872977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:57.867 [2024-05-15 04:25:45.873218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:57.867 [2024-05-15 04:25:45.873464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-05-15 04:25:45.873487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-05-15 04:25:45.873502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-05-15 04:25:45.877174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.126 [2024-05-15 04:25:45.886225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.126 [2024-05-15 04:25:45.886767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.886986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.887015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.887032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.887274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.887520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.887543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.887558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.891214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.127 [2024-05-15 04:25:45.900234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.900689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.900895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.900922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.900962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.901214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.901472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.901497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.901512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.905250] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.127 [2024-05-15 04:25:45.914307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.914772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.915023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.915052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.915069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.915311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.915557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.915580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.915595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.919233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.127 [2024-05-15 04:25:45.928231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.928726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.928982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.929011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.929028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.929269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.929515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.929538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.929552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.933209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.127 [2024-05-15 04:25:45.942222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.942720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.942966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.942995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.943012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.943260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.943506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.943530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.943545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.947184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.127 [2024-05-15 04:25:45.956183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.956663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.956920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.956957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.956976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.957217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.957463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.957486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.957501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.961136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.127 [2024-05-15 04:25:45.970135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.970587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.970773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.970800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.970817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.971069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.971315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.971339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.971353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.974989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.127 [2024-05-15 04:25:45.984199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.984705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.984922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.984957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.984975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.985216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.985468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.985491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.985506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:45.989140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.127 [2024-05-15 04:25:45.998156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:45.998614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.998832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:45.998860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:45.998876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:45.999129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:45.999376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.127 [2024-05-15 04:25:45.999400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.127 [2024-05-15 04:25:45.999414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.127 [2024-05-15 04:25:46.003053] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.127 [2024-05-15 04:25:46.012059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.127 [2024-05-15 04:25:46.012539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:46.012765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.127 [2024-05-15 04:25:46.012792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.127 [2024-05-15 04:25:46.012809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.127 [2024-05-15 04:25:46.013061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.127 [2024-05-15 04:25:46.013308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.013331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.013345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.016980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.128 [2024-05-15 04:25:46.025979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.026464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.026682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.026712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.026729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.026983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.027229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.027261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.027277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.030906] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.128 [2024-05-15 04:25:46.039918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.040401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.040612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.040640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.040657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.040897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.041153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.041177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.041192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.044822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.128 [2024-05-15 04:25:46.053821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.054309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.054496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.054524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.054541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.054782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.055039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.055063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.055077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.058708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.128 [2024-05-15 04:25:46.067919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.068379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.068621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.068648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.068665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.068906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.069163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.069187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.069208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.072840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.128 [2024-05-15 04:25:46.081841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.082328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.082576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.082603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.082620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.082861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.083117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.083150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.083165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.086799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.128 [2024-05-15 04:25:46.095800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.096288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.096502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.096529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.096546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.096787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.097045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.097069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.097083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.100710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.128 [2024-05-15 04:25:46.109709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.110198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.110381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.110409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.110425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.110666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.110912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.110945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.110961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.114599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.128 [2024-05-15 04:25:46.123811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.124273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.124490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.124517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.124534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.124775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.125032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.125056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.125070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.128 [2024-05-15 04:25:46.128710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.128 [2024-05-15 04:25:46.137727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.128 [2024-05-15 04:25:46.138216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.138406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.128 [2024-05-15 04:25:46.138441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.128 [2024-05-15 04:25:46.138460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.128 [2024-05-15 04:25:46.138702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.128 [2024-05-15 04:25:46.138966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.128 [2024-05-15 04:25:46.138990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.128 [2024-05-15 04:25:46.139005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.388 [2024-05-15 04:25:46.142661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.388 [2024-05-15 04:25:46.151684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.388 [2024-05-15 04:25:46.152182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.388 [2024-05-15 04:25:46.152378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.388 [2024-05-15 04:25:46.152408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.388 [2024-05-15 04:25:46.152425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.388 [2024-05-15 04:25:46.152676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.388 [2024-05-15 04:25:46.152927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.388 [2024-05-15 04:25:46.152962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.388 [2024-05-15 04:25:46.152980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.388 [2024-05-15 04:25:46.156708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.388 [2024-05-15 04:25:46.165763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.388 [2024-05-15 04:25:46.166230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.388 [2024-05-15 04:25:46.166450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.388 [2024-05-15 04:25:46.166477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.388 [2024-05-15 04:25:46.166494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.388 [2024-05-15 04:25:46.166735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.388 [2024-05-15 04:25:46.166993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.388 [2024-05-15 04:25:46.167017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.388 [2024-05-15 04:25:46.167032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.388 [2024-05-15 04:25:46.170662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.388 [2024-05-15 04:25:46.179662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.388 [2024-05-15 04:25:46.180151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.388 [2024-05-15 04:25:46.180364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.180392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.180409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.180650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.180896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.180919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.180944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.184578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.389 [2024-05-15 04:25:46.193576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.194064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.194320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.194347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.194364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.194605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.194850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.194873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.194888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.198524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.389 [2024-05-15 04:25:46.207525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.207984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.208199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.208227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.208244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.208485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.208731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.208754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.208768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.212413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.389 [2024-05-15 04:25:46.221625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.222080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.222272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.222299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.222316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.222557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.222803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.222826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.222841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.226480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.389 [2024-05-15 04:25:46.235691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.236151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.236334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.236361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.236378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.236619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.236865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.236888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.236902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.240552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.389 [2024-05-15 04:25:46.249760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.250200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.250445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.250478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.250496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.250738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.250995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.251019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.251034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.254663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.389 [2024-05-15 04:25:46.263677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.264166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.264379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.264406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.264423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.264664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.264910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.264944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.264961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.268593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.389 [2024-05-15 04:25:46.277594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.278096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.278339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.278367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.278384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.278625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.278871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.278894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.278908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.282550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.389 [2024-05-15 04:25:46.291553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.292059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.292256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.292283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.292305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.292548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.292793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.292816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.292831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.389 [2024-05-15 04:25:46.296474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.389 [2024-05-15 04:25:46.305474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.389 [2024-05-15 04:25:46.305962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.306208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.389 [2024-05-15 04:25:46.306236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.389 [2024-05-15 04:25:46.306253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.389 [2024-05-15 04:25:46.306494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.389 [2024-05-15 04:25:46.306749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.389 [2024-05-15 04:25:46.306771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.389 [2024-05-15 04:25:46.306786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.310429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.390 [2024-05-15 04:25:46.319436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.319913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.320141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.320171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.320189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.320431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.320677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.320700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.320715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.324354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.390 [2024-05-15 04:25:46.333365] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.333817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.334066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.334095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.334112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.334360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.334605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.334628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.334643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.338282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.390 [2024-05-15 04:25:46.347294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.347782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.348024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.348052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.348069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.348310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.348557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.348579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.348594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.352232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.390 [2024-05-15 04:25:46.361229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.361720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.361965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.361994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.362011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.362253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.362499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.362522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.362536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3487478 Killed "${NVMF_APP[@]}" "$@" 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:58.390 [2024-05-15 04:25:46.366175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3488431 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3488431 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3488431 ']' 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:58.390 04:25:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:58.390 [2024-05-15 04:25:46.375185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.375648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.375863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.375891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.375908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.376157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.376404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.376427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.376442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.380154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.390 [2024-05-15 04:25:46.389161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.390 [2024-05-15 04:25:46.389615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.389854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.390 [2024-05-15 04:25:46.389881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.390 [2024-05-15 04:25:46.389898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.390 [2024-05-15 04:25:46.390149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.390 [2024-05-15 04:25:46.390396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.390 [2024-05-15 04:25:46.390419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.390 [2024-05-15 04:25:46.390434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.390 [2024-05-15 04:25:46.394073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
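The bdevperf.sh trace interleaved above shows the test killing the previous target ("Killed ${NVMF_APP[@]}"), restarting it through tgt_init / nvmfappstart -m 0xE (new pid 3488431), and then blocking in waitforlisten until the new nvmf_tgt answers on /var/tmp/spdk.sock. A rough sketch of that wait in plain bash; the pid and socket path come from the log, while the real waitforlisten helper in autotest_common.sh may poll differently:

pid=3488431
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    # Stop early if the freshly started target already exited.
    kill -0 "$pid" 2>/dev/null || { echo 'nvmf_tgt exited prematurely'; exit 1; }
    # Consider it ready once the UNIX-domain RPC socket has been created.
    [ -S "$rpc_sock" ] && { echo 'nvmf_tgt is listening'; break; }
    sleep 0.5
done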
00:24:58.650 [2024-05-15 04:25:46.403137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.403626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.403851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.403880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.403897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.650 [2024-05-15 04:25:46.404163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.650 [2024-05-15 04:25:46.404430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.650 [2024-05-15 04:25:46.404451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.650 [2024-05-15 04:25:46.404472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.650 [2024-05-15 04:25:46.407948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.650 [2024-05-15 04:25:46.416552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.416603] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:24:58.650 [2024-05-15 04:25:46.416658] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.650 [2024-05-15 04:25:46.417053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.417228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.417255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.417270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.650 [2024-05-15 04:25:46.417512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.650 [2024-05-15 04:25:46.417713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.650 [2024-05-15 04:25:46.417732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.650 [2024-05-15 04:25:46.417744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.650 [2024-05-15 04:25:46.420896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.650 [2024-05-15 04:25:46.430082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.430561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.430781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.430805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.430821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.650 [2024-05-15 04:25:46.431091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.650 [2024-05-15 04:25:46.431317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.650 [2024-05-15 04:25:46.431337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.650 [2024-05-15 04:25:46.431349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.650 [2024-05-15 04:25:46.434377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.650 [2024-05-15 04:25:46.443414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.443885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.444151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.444178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.444199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.650 [2024-05-15 04:25:46.444449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.650 [2024-05-15 04:25:46.444650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.650 [2024-05-15 04:25:46.444669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.650 [2024-05-15 04:25:46.444681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.650 [2024-05-15 04:25:46.447744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.650 [2024-05-15 04:25:46.456751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.457490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.457741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.457769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.457786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.650 [2024-05-15 04:25:46.458045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.650 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.650 [2024-05-15 04:25:46.458277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.650 [2024-05-15 04:25:46.458312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.650 [2024-05-15 04:25:46.458324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.650 [2024-05-15 04:25:46.461461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.650 [2024-05-15 04:25:46.470651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.650 [2024-05-15 04:25:46.471139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.471350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.650 [2024-05-15 04:25:46.471375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.650 [2024-05-15 04:25:46.471392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.471650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.471917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.471949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.471964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.475555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.651 [2024-05-15 04:25:46.484770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.485269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.485463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.485488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.485509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.485741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.486027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.486048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.486061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.489634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.651 [2024-05-15 04:25:46.498685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.499171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.499345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.499369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.499385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.499521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:58.651 [2024-05-15 04:25:46.499625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.499827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.499846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.499858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.503455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.651 [2024-05-15 04:25:46.512603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.513521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.513995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.514040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.514061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.514301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.514510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.514530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.514546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.518092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.651 [2024-05-15 04:25:46.526694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.527340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.527687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.527714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.527731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.528021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.528247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.528269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.528282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.531873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.651 [2024-05-15 04:25:46.540640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.541175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.541381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.541406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.541422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.541673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.541919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.541960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.541991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.545620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.651 [2024-05-15 04:25:46.554608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.555084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.555342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.555370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.555388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.555630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.555886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.555910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.555944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.559538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.651 [2024-05-15 04:25:46.568612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.569251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.569573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.569599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.569619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.569908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.570171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.570193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.570209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.573855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.651 [2024-05-15 04:25:46.582521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.583034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.583221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.583250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.583267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.583537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.583784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.583808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.583823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.587397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.651 [2024-05-15 04:25:46.596433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.651 [2024-05-15 04:25:46.596922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.597151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.651 [2024-05-15 04:25:46.597177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.651 [2024-05-15 04:25:46.597192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.651 [2024-05-15 04:25:46.597449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.651 [2024-05-15 04:25:46.597695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.651 [2024-05-15 04:25:46.597719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.651 [2024-05-15 04:25:46.597734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.651 [2024-05-15 04:25:46.601328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.652 [2024-05-15 04:25:46.610285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.652 [2024-05-15 04:25:46.610771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.611014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.611041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.652 [2024-05-15 04:25:46.611057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.652 [2024-05-15 04:25:46.611321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.652 [2024-05-15 04:25:46.611577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.652 [2024-05-15 04:25:46.611601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.652 [2024-05-15 04:25:46.611616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.652 [2024-05-15 04:25:46.615231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.652 [2024-05-15 04:25:46.619572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.652 [2024-05-15 04:25:46.619608] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.652 [2024-05-15 04:25:46.619624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.652 [2024-05-15 04:25:46.619637] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:58.652 [2024-05-15 04:25:46.619648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.652 [2024-05-15 04:25:46.619756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.652 [2024-05-15 04:25:46.619907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.652 [2024-05-15 04:25:46.619925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.652 [2024-05-15 04:25:46.623943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.652 [2024-05-15 04:25:46.624391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.624577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.624602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.652 [2024-05-15 04:25:46.624619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.652 [2024-05-15 04:25:46.624842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.652 [2024-05-15 04:25:46.625074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.652 [2024-05-15 04:25:46.625096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.652 [2024-05-15 04:25:46.625112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.652 [2024-05-15 04:25:46.628394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.652 [2024-05-15 04:25:46.637601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.652 [2024-05-15 04:25:46.638242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.638487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.638513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.652 [2024-05-15 04:25:46.638535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.652 [2024-05-15 04:25:46.638780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.652 [2024-05-15 04:25:46.639040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.652 [2024-05-15 04:25:46.639064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.652 [2024-05-15 04:25:46.639083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.652 [2024-05-15 04:25:46.642408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.652 [2024-05-15 04:25:46.651366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.652 [2024-05-15 04:25:46.651983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.652217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.652 [2024-05-15 04:25:46.652243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.652 [2024-05-15 04:25:46.652265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.652 [2024-05-15 04:25:46.652509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.652 [2024-05-15 04:25:46.652732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.652 [2024-05-15 04:25:46.652753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.652 [2024-05-15 04:25:46.652770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.652 [2024-05-15 04:25:46.656003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.912 [2024-05-15 04:25:46.665294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.912 [2024-05-15 04:25:46.665827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.666018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.666046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.912 [2024-05-15 04:25:46.666068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.912 [2024-05-15 04:25:46.666298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.912 [2024-05-15 04:25:46.666531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.912 [2024-05-15 04:25:46.666554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.912 [2024-05-15 04:25:46.666572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.912 [2024-05-15 04:25:46.669989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.912 [2024-05-15 04:25:46.678945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.912 [2024-05-15 04:25:46.679492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.679713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.679739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.912 [2024-05-15 04:25:46.679759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.912 [2024-05-15 04:25:46.680009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.912 [2024-05-15 04:25:46.680230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.912 [2024-05-15 04:25:46.680250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.912 [2024-05-15 04:25:46.680267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.912 [2024-05-15 04:25:46.683521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.912 [2024-05-15 04:25:46.692524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.912 [2024-05-15 04:25:46.693139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.693371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.912 [2024-05-15 04:25:46.693397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.912 [2024-05-15 04:25:46.693419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.912 [2024-05-15 04:25:46.693664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.693886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.693922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.693948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.697173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.913 [2024-05-15 04:25:46.706258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.706864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.707079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.707126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.707147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.707392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.707612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.707633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.707650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.710849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.913 [2024-05-15 04:25:46.719828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.720286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.720467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.720494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.720510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.720742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.720967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.720988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.721001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.724219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.913 [2024-05-15 04:25:46.733378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.733836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.734021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.734048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.734072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.734305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.734519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.734539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.734552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.737800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.913 [2024-05-15 04:25:46.746953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.747399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.747627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.747652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.747668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.747885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.748144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.748166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.748179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.751491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.913 [2024-05-15 04:25:46.760496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.760937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.761140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.761165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.761180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.761397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.761628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.761648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.761661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.764840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.913 [2024-05-15 04:25:46.774022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.774438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.774639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.774665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.774685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.774917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.775142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.775163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.775176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.778428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.913 [2024-05-15 04:25:46.787631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.788082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.788282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.788308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.788323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.788540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.788771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.788791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.788805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.792021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.913 [2024-05-15 04:25:46.801216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.913 [2024-05-15 04:25:46.801627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.801821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.913 [2024-05-15 04:25:46.801847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.913 [2024-05-15 04:25:46.801862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.913 [2024-05-15 04:25:46.802087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.913 [2024-05-15 04:25:46.802323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.913 [2024-05-15 04:25:46.802343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.913 [2024-05-15 04:25:46.802357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.913 [2024-05-15 04:25:46.805646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.913 [2024-05-15 04:25:46.814768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.815205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.815383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.815407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.815422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.815645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.815866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.815887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.815900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.819133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.914 [2024-05-15 04:25:46.828315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.828762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.828964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.828990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.829005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.829222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.829453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.829473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.829485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.832736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.914 [2024-05-15 04:25:46.841899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.842332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.842510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.842534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.842549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.842767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.843006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.843027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.843039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.846293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.914 [2024-05-15 04:25:46.855544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.855993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.856192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.856217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.856232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.856450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.856686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.856706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.856720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.859922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.914 [2024-05-15 04:25:46.869061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.869521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.869715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.869741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.869756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.869981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.870218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.870239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.870252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.873504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.914 [2024-05-15 04:25:46.882664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.883103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.883300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.883325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.883340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.883569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.883784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.883804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.883817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.887056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.914 [2024-05-15 04:25:46.896295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.896753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.896927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.896960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.896976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.897193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.897424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.897449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.897463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.900720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.914 [2024-05-15 04:25:46.909797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.910253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.910453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.910478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.910493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.910710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.910940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.910961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.910991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.914 [2024-05-15 04:25:46.914386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:58.914 [2024-05-15 04:25:46.923642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.914 [2024-05-15 04:25:46.924101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.924268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.914 [2024-05-15 04:25:46.924294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:58.914 [2024-05-15 04:25:46.924310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:58.914 [2024-05-15 04:25:46.924535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:58.914 [2024-05-15 04:25:46.924773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.914 [2024-05-15 04:25:46.924808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.914 [2024-05-15 04:25:46.924822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.174 [2024-05-15 04:25:46.928257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.174 [2024-05-15 04:25:46.937164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.174 [2024-05-15 04:25:46.937600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.937781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.937806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.174 [2024-05-15 04:25:46.937822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.174 [2024-05-15 04:25:46.938048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.174 [2024-05-15 04:25:46.938284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.174 [2024-05-15 04:25:46.938305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.174 [2024-05-15 04:25:46.938324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.174 [2024-05-15 04:25:46.941590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.174 [2024-05-15 04:25:46.950780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.174 [2024-05-15 04:25:46.951242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.951477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.951502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.174 [2024-05-15 04:25:46.951517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.174 [2024-05-15 04:25:46.951734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.174 [2024-05-15 04:25:46.951973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.174 [2024-05-15 04:25:46.951994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.174 [2024-05-15 04:25:46.952007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.174 [2024-05-15 04:25:46.955317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.174 [2024-05-15 04:25:46.964291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.174 [2024-05-15 04:25:46.964746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.964946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-05-15 04:25:46.964973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.174 [2024-05-15 04:25:46.964988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.174 [2024-05-15 04:25:46.965206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.174 [2024-05-15 04:25:46.965435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:46.965456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:46.965469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:46.968678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.175 [2024-05-15 04:25:46.977846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:46.978299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:46.978472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:46.978499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:46.978515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:46.978744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:46.978968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:46.978990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:46.979003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:46.982247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.175 [2024-05-15 04:25:46.991397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:46.991823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:46.992014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:46.992041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:46.992056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:46.992286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:46.992500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:46.992521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:46.992534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:46.995733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.175 [2024-05-15 04:25:47.004952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.005386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.005586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.005611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.005626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.005843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.006082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.006103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.006116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.009369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.175 [2024-05-15 04:25:47.018608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.019036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.019250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.019275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.019290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.019508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.019738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.019758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.019771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.023044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.175 [2024-05-15 04:25:47.032167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.032590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.032785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.032810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.032825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.033050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.033286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.033307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.033320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.036530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.175 [2024-05-15 04:25:47.045704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.046163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.046333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.046358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.046373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.046590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.046820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.046840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.046853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.050134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.175 [2024-05-15 04:25:47.059272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.059699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.059892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.059918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.059939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.060158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.060390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.060410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.060423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.063674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.175 [2024-05-15 04:25:47.072786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.073205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.073403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.073429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.073444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.073661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.073890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.073911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.073924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.077165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.175 [2024-05-15 04:25:47.086323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.086754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.086958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.086985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.175 [2024-05-15 04:25:47.087001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.175 [2024-05-15 04:25:47.087230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.175 [2024-05-15 04:25:47.087445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.175 [2024-05-15 04:25:47.087466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.175 [2024-05-15 04:25:47.087479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.175 [2024-05-15 04:25:47.090727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.175 [2024-05-15 04:25:47.099882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.175 [2024-05-15 04:25:47.100297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.100502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.175 [2024-05-15 04:25:47.100527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.100543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.100772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.100995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.101016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.101029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.104315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.176 [2024-05-15 04:25:47.113473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.113886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.114096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.114127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.114143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.114360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.114589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.114610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.114622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.117777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.176 [2024-05-15 04:25:47.127167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.127618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.127845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.127869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.127885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.128111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.128345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.128366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.128379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.131677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.176 [2024-05-15 04:25:47.140846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.141317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.141516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.141541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.141556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.141773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.142032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.142054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.142068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.145338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.176 [2024-05-15 04:25:47.154446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.154874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.155071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.155097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.155118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.155349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.155564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.155585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.155597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.158821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.176 [2024-05-15 04:25:47.167980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.168397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.168599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.168624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.168639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.168856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.169094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.169117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.169130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.172534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.176 [2024-05-15 04:25:47.181631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.176 [2024-05-15 04:25:47.182093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.182281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.176 [2024-05-15 04:25:47.182306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.176 [2024-05-15 04:25:47.182321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.176 [2024-05-15 04:25:47.182551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.176 [2024-05-15 04:25:47.182766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.176 [2024-05-15 04:25:47.182786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.176 [2024-05-15 04:25:47.182799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.176 [2024-05-15 04:25:47.186119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.435 [2024-05-15 04:25:47.195223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.195646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.195844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.195869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.195885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.435 [2024-05-15 04:25:47.196118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.435 [2024-05-15 04:25:47.196352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.435 [2024-05-15 04:25:47.196373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.435 [2024-05-15 04:25:47.196386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.435 [2024-05-15 04:25:47.199704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.435 [2024-05-15 04:25:47.208867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.209354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.209548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.209573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.209588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.435 [2024-05-15 04:25:47.209805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.435 [2024-05-15 04:25:47.210044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.435 [2024-05-15 04:25:47.210065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.435 [2024-05-15 04:25:47.210078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.435 [2024-05-15 04:25:47.213332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.435 [2024-05-15 04:25:47.222525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.222951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.223144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.223168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.223183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.435 [2024-05-15 04:25:47.223400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.435 [2024-05-15 04:25:47.223631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.435 [2024-05-15 04:25:47.223651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.435 [2024-05-15 04:25:47.223664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.435 [2024-05-15 04:25:47.226871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.435 [2024-05-15 04:25:47.236047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.236481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.236639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.236664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.236679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.435 [2024-05-15 04:25:47.236909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.435 [2024-05-15 04:25:47.237136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.435 [2024-05-15 04:25:47.237157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.435 [2024-05-15 04:25:47.237170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.435 [2024-05-15 04:25:47.240429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.435 [2024-05-15 04:25:47.249639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.250134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.250308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.250332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.250348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.435 [2024-05-15 04:25:47.250577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.435 [2024-05-15 04:25:47.250791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.435 [2024-05-15 04:25:47.250811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.435 [2024-05-15 04:25:47.250824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.435 [2024-05-15 04:25:47.254064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.435 [2024-05-15 04:25:47.263199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.435 [2024-05-15 04:25:47.263650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.263819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.435 [2024-05-15 04:25:47.263844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.435 [2024-05-15 04:25:47.263859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.264086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.264322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.264342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.264355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.267608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.436 [2024-05-15 04:25:47.276737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.277197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.277414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.277440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.277455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.277682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.277897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.277923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.277960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.281222] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.436 [2024-05-15 04:25:47.290397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.290841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.291019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.291045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.291060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.291291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.291506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.291527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.291540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.294791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.436 [2024-05-15 04:25:47.304024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.304479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.304671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.304696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.304711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.304950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.305167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.305188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.305200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.308452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.436 [2024-05-15 04:25:47.317622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.318030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.318227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.318252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.318267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.318484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.318714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.318736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.318756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.321997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.436 [2024-05-15 04:25:47.331123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.331561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.331733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.331760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.331776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.332002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.332238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.332259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.332271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.335485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.436 [2024-05-15 04:25:47.344734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.345164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.345359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.345385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.345400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.345618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.345848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.345869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.345882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.349206] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.436 [2024-05-15 04:25:47.358378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.358812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.359033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.359060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.359075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.359305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.359520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.359541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.359553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.362786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.436 [2024-05-15 04:25:47.371910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.372341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.372546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.372571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.372587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.372804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.373045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.436 [2024-05-15 04:25:47.373066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.436 [2024-05-15 04:25:47.373079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.436 [2024-05-15 04:25:47.376329] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.436 [2024-05-15 04:25:47.385494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.436 [2024-05-15 04:25:47.385948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.386131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.436 [2024-05-15 04:25:47.386157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.436 [2024-05-15 04:25:47.386172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.436 [2024-05-15 04:25:47.386401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.436 [2024-05-15 04:25:47.386616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.437 [2024-05-15 04:25:47.386637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.437 [2024-05-15 04:25:47.386650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.437 [2024-05-15 04:25:47.389867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.437 [2024-05-15 04:25:47.399093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.437 [2024-05-15 04:25:47.399509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.399686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.399711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.437 [2024-05-15 04:25:47.399726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.437 [2024-05-15 04:25:47.399952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.437 [2024-05-15 04:25:47.400173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.437 [2024-05-15 04:25:47.400194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.437 [2024-05-15 04:25:47.400208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.437 [2024-05-15 04:25:47.403469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.437 [2024-05-15 04:25:47.412828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.437 [2024-05-15 04:25:47.413282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.413479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.413504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.437 [2024-05-15 04:25:47.413519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.437 [2024-05-15 04:25:47.413749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.437 [2024-05-15 04:25:47.413992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.437 [2024-05-15 04:25:47.414014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.437 [2024-05-15 04:25:47.414027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.437 [2024-05-15 04:25:47.417348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.437 [2024-05-15 04:25:47.426408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.437 [2024-05-15 04:25:47.426855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.427046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.427073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.437 [2024-05-15 04:25:47.427090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.437 [2024-05-15 04:25:47.427307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.437 [2024-05-15 04:25:47.427528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.437 [2024-05-15 04:25:47.427552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.437 [2024-05-15 04:25:47.427569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.437 [2024-05-15 04:25:47.430947] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
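The `(( i == 0 ))` / `return 0` fragment traced above appears to be the tail of the harness's countdown wait for the freshly started nvmf target, just before `timing_exit start_nvmf_tgt`. The general shape of such a wait is a counter that decrements on every probe and reports failure only if it reaches zero; the probe below (checking for the default /var/tmp/spdk.sock RPC socket) and the counts are assumptions for illustration, not the exact code in autotest_common.sh.

    # Illustrative countdown wait in the style of the '(( i == 0 ))' check traced above.
    # The socket path and iteration count are assumed defaults, not taken from this run.
    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock} i=50
        while [[ ! -S $sock ]]; do
            (( --i == 0 )) && return 1   # counter exhausted: target never exposed its RPC socket
            sleep 0.1
        done
        return 0                          # socket appeared before the counter ran out
    }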
00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.437 [2024-05-15 04:25:47.439528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.437 [2024-05-15 04:25:47.440020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.437 [2024-05-15 04:25:47.440455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.440661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-05-15 04:25:47.440686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.437 [2024-05-15 04:25:47.440702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.437 [2024-05-15 04:25:47.440962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.437 [2024-05-15 04:25:47.441184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.437 [2024-05-15 04:25:47.441206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.437 [2024-05-15 04:25:47.441234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.437 [2024-05-15 04:25:47.444507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
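`rpc_cmd` forwards its arguments to the running target's JSON-RPC interface, and the "TCP Transport Init" notice above is the target acknowledging the `nvmf_create_transport` call from host/bdevperf.sh@17. Outside the harness the same call could be issued directly with scripts/rpc.py; the socket path below is the SPDK default and an assumption here, while the flags are copied verbatim from the traced command.

    # Equivalent direct RPC call to what host/bdevperf.sh@17 issues via rpc_cmd above.
    # -s selects the RPC listen socket of the running nvmf_tgt (default path assumed);
    # the remaining flags are forwarded exactly as traced in the log.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192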
00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.437 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.695 [2024-05-15 04:25:47.453557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.695 [2024-05-15 04:25:47.454026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.695 [2024-05-15 04:25:47.454229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.695 [2024-05-15 04:25:47.454254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.695 [2024-05-15 04:25:47.454270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.695 [2024-05-15 04:25:47.454527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.695 [2024-05-15 04:25:47.454766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.695 [2024-05-15 04:25:47.454788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.695 [2024-05-15 04:25:47.454801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.696 [2024-05-15 04:25:47.458107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.696 [2024-05-15 04:25:47.467101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.696 [2024-05-15 04:25:47.467807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.468196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.468225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.696 [2024-05-15 04:25:47.468245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.696 [2024-05-15 04:25:47.468486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.696 [2024-05-15 04:25:47.468705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.696 [2024-05-15 04:25:47.468726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.696 [2024-05-15 04:25:47.468743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.696 [2024-05-15 04:25:47.471955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
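host/bdevperf.sh@18 creates the RAM-backed bdev that will back the test namespace: per the usual `bdev_malloc_create` signature, `64 512 -b Malloc0` requests a 64 MB malloc bdev with a 512-byte block size named Malloc0 (the lone "Malloc0" printed below is the RPC echoing the new bdev's name). A stand-alone equivalent, again assuming the default RPC socket, might look like this:

    # Create and then inspect the malloc bdev outside the harness (rpc.py defaults assumed).
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0   # 64 MB, 512 B blocks
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0              # confirm it exists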
00:24:59.696 Malloc0 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.696 [2024-05-15 04:25:47.480779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.696 [2024-05-15 04:25:47.481340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.481578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.481603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.696 [2024-05-15 04:25:47.481624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.696 [2024-05-15 04:25:47.481849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.696 [2024-05-15 04:25:47.482086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.696 [2024-05-15 04:25:47.482109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.696 [2024-05-15 04:25:47.482126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.696 [2024-05-15 04:25:47.485493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
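host/bdevperf.sh@19 then creates the subsystem the initiator has been trying to reach all along: `-s` sets its serial number and `-a` allows any host NQN to connect, so no per-host allow-list entries are needed. A stand-alone equivalent (RPC socket path assumed):

    # Create the subsystem exactly as traced above; -a = allow any host, -s = serial number.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001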
00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.696 [2024-05-15 04:25:47.494563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.696 [2024-05-15 04:25:47.495007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.495190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.696 [2024-05-15 04:25:47.495215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be4990 with addr=10.0.0.2, port=4420 00:24:59.696 [2024-05-15 04:25:47.495231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4990 is same with the state(5) to be set 00:24:59.696 [2024-05-15 04:25:47.495449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4990 (9): Bad file descriptor 00:24:59.696 [2024-05-15 04:25:47.495679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.696 [2024-05-15 04:25:47.495699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.696 [2024-05-15 04:25:47.495712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:59.696 [2024-05-15 04:25:47.499055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.696 [2024-05-15 04:25:47.499796] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:59.696 [2024-05-15 04:25:47.500088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.696 04:25:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3487765 00:24:59.696 [2024-05-15 04:25:47.508319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.696 [2024-05-15 04:25:47.583300] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
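With the namespace attached (host/bdevperf.sh@20) and the listener added (@21), the target finally binds 10.0.0.2:4420, the reconnect loop stops failing, and the outstanding reset completes ("Resetting controller successful."). The deprecation warning only concerns the RPC's parameter naming: the listener address's `transport` field is being replaced by `trtype`, which is what the `-t tcp` flag populates. The same two calls, plus a host-side connect in the `nvme connect` form exported as NVME_CONNECT later in this log, could be issued by hand roughly as follows (the RPC socket path and the host-side connect are illustrative, not part of this run):

    # Namespace + listener, as traced from host/bdevperf.sh@20 and @21 (RPC socket assumed):
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Illustrative kernel-initiator connect to the same listener (not performed by this test):
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1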
00:25:09.658 00:25:09.658 Latency(us) 00:25:09.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.658 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.658 Verification LBA range: start 0x0 length 0x4000 00:25:09.658 Nvme1n1 : 15.01 6501.98 25.40 10549.25 0.00 7480.52 1407.81 18058.81 00:25:09.658 =================================================================================================================== 00:25:09.658 Total : 6501.98 25.40 10549.25 0.00 7480.52 1407.81 18058.81 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.658 rmmod nvme_tcp 00:25:09.658 rmmod nvme_fabrics 00:25:09.658 rmmod nvme_keyring 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3488431 ']' 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3488431 00:25:09.658 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3488431 ']' 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3488431 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488431 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488431' 00:25:09.659 killing process with pid 3488431 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3488431 00:25:09.659 [2024-05-15 04:25:56.247131] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3488431 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.659 04:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.596 04:25:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.596 00:25:10.596 real 0m23.231s 00:25:10.596 user 1m1.450s 00:25:10.596 sys 0m4.566s 00:25:10.596 04:25:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.596 04:25:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.596 ************************************ 00:25:10.596 END TEST nvmf_bdevperf 00:25:10.596 ************************************ 00:25:10.855 04:25:58 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:10.855 04:25:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.855 04:25:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.855 04:25:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.855 ************************************ 00:25:10.855 START TEST nvmf_target_disconnect 00:25:10.855 ************************************ 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:10.855 * Looking for test storage... 
00:25:10.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.855 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.856 04:25:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.392 04:26:01 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.392 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:25:13.393 00:25:13.393 --- 10.0.0.2 ping statistics --- 00:25:13.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.393 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:25:13.393 00:25:13.393 --- 10.0.0.1 ping statistics --- 00:25:13.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.393 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.393 ************************************ 00:25:13.393 START TEST nvmf_target_disconnect_tc1 00:25:13.393 ************************************ 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:25:13.393 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:13.393 EAL: No 
free 2048 kB hugepages reported on node 1 00:25:13.651 [2024-05-15 04:26:01.434138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.651 [2024-05-15 04:26:01.434467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.651 [2024-05-15 04:26:01.434498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd4d60 with addr=10.0.0.2, port=4420 00:25:13.651 [2024-05-15 04:26:01.434544] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:13.651 [2024-05-15 04:26:01.434569] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:13.651 [2024-05-15 04:26:01.434584] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:13.651 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:13.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:13.651 Initializing NVMe Controllers 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:25:13.651 00:25:13.651 real 0m0.106s 00:25:13.651 user 0m0.042s 00:25:13.651 sys 0m0.063s 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:13.651 ************************************ 00:25:13.651 END TEST nvmf_target_disconnect_tc1 00:25:13.651 ************************************ 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.651 ************************************ 00:25:13.651 START TEST nvmf_target_disconnect_tc2 00:25:13.651 ************************************ 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3491988 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3491988 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3491988 ']' 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.651 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.652 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.652 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 [2024-05-15 04:26:01.543269] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:13.652 [2024-05-15 04:26:01.543351] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.652 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.652 [2024-05-15 04:26:01.617656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.910 [2024-05-15 04:26:01.726305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.910 [2024-05-15 04:26:01.726356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.910 [2024-05-15 04:26:01.726384] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.910 [2024-05-15 04:26:01.726395] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.910 [2024-05-15 04:26:01.726405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:13.910 [2024-05-15 04:26:01.726489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:13.910 [2024-05-15 04:26:01.726522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:13.910 [2024-05-15 04:26:01.726580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:13.910 [2024-05-15 04:26:01.726582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 Malloc0 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 [2024-05-15 04:26:01.893741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 04:26:01 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.910 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 [2024-05-15 04:26:01.921761] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:13.910 [2024-05-15 04:26:01.922041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=3492014 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.168 04:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:25:14.168 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.074 04:26:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 3491988 00:25:16.074 04:26:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.074 Read completed with error (sct=0, sc=8) 00:25:16.074 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 
starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 [2024-05-15 04:26:03.947175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 
00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 [2024-05-15 04:26:03.947497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 
Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 [2024-05-15 04:26:03.947811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Read completed with error (sct=0, sc=8) 00:25:16.075 starting I/O failed 00:25:16.075 Write completed with error (sct=0, sc=8) 00:25:16.076 starting I/O failed 00:25:16.076 [2024-05-15 04:26:03.948159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:25:16.076 [2024-05-15 04:26:03.948449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.948698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.948726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.948900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.949299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.949731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.949979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.950184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.950402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.950426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.950622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.950818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.950843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 00:25:16.076 [2024-05-15 04:26:03.951054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.951231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.076 [2024-05-15 04:26:03.951256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.076 qpair failed and we were unable to recover it. 
00:25:16.076 [2024-05-15 04:26:03.951452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.076 [2024-05-15 04:26:03.951750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.076 [2024-05-15 04:26:03.951807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:16.076 qpair failed and we were unable to recover it.
00:25:16.076 [... the same pattern (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x1b70420 with addr=10.0.0.2, port=4420, ending with "qpair failed and we were unable to recover it.") repeats for every retry from 04:26:03.951452 through 04:26:04.019047 ...]
00:25:16.081 [2024-05-15 04:26:04.019338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.081 [2024-05-15 04:26:04.019569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.081 [2024-05-15 04:26:04.019597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:16.081 qpair failed and we were unable to recover it.
00:25:16.081 [... the same pattern continues for tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 through 04:26:04.021611, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:16.081 [2024-05-15 04:26:04.021856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.022117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.022144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.022393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.022640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.022669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.022892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.023293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.023692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.023987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.024215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.024437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.024462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.081 [2024-05-15 04:26:04.024632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.024809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.024835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 
00:25:16.081 [2024-05-15 04:26:04.025080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.025284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.081 [2024-05-15 04:26:04.025310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.081 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.025478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.025691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.025720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.025968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.026172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.026197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.026438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.026634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.026659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.026829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.027272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.027903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 
00:25:16.082 [2024-05-15 04:26:04.028115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.028292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.028316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.028507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.028726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.028755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.029013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.029188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.029213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.029417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.029637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.029665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.029876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.030113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.030139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.030328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.030520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.030545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.030781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.030983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.031009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 
00:25:16.082 [2024-05-15 04:26:04.031183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.031420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.031465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.031686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.031845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.031869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.032076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.032245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.032270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.032440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.032632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.032658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.032882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.033119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.033145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.033354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.033722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.033771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.033987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.034183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.034208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 
00:25:16.082 [2024-05-15 04:26:04.034433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.034700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.034724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.034957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.035128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.035153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.035439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.035917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.035981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.036196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.036493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.036517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.036727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.036947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.036977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.037188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.037447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.037472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.037661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.037860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.037884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 
00:25:16.082 [2024-05-15 04:26:04.038098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.038296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.038320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.082 [2024-05-15 04:26:04.038530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.038722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.082 [2024-05-15 04:26:04.038747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.082 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.038944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.039121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.039146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.039357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.039563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.039590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.039810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.040281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.040700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.040901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 
00:25:16.083 [2024-05-15 04:26:04.041131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.041346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.041370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.041593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.041811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.041838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.042084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.042243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.042268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.042442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.042713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.042738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.042965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.043181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.043209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.043446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.043641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.043666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.043862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 
00:25:16.083 [2024-05-15 04:26:04.044338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.044768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.044993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.045191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.045390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.045415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.045625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.045812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.045844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.046061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.046317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.046346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.046592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.046775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.046800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.047030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.047293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.047323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 
00:25:16.083 [2024-05-15 04:26:04.047535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.047764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.047791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.048026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.048202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.048227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.048447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.048664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.048691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.048912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.049142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.049169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.049400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.049601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.049626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.049847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.050066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.050092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.050321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.050596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.050620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 
00:25:16.083 [2024-05-15 04:26:04.050830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.051089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.051119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.083 qpair failed and we were unable to recover it. 00:25:16.083 [2024-05-15 04:26:04.051343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.083 [2024-05-15 04:26:04.051596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.051624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.051843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.052059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.052092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.052288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.052540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.052565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.052793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.052984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.053010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.053213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.053411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.053438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.053656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.053863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.053887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 
00:25:16.084 [2024-05-15 04:26:04.054097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.054325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.054350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.054533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.054738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.054764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.054963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.055257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.055285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.055559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.055783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.055808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.056009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.056215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.056240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.056435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.056616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.056645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.056841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.057062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.057091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 
00:25:16.084 [2024-05-15 04:26:04.057338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.057551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.057579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.057774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.057995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.058021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.058274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.058489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.058518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.058765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.058990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.059016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.059212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.059437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.059462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.059646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.059857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.059886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.060108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.060324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.060348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 
00:25:16.084 [2024-05-15 04:26:04.060546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.060805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.060830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.061026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.061198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.061237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.061491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.061738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.061766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.061949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.062196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.062224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.062448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.062682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.062707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.062964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.063155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.063184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 00:25:16.084 [2024-05-15 04:26:04.063486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.063869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.084 [2024-05-15 04:26:04.063927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.084 qpair failed and we were unable to recover it. 
00:25:16.084 [2024-05-15 04:26:04.064173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.064427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.064454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.064674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.064905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.064936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.065107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.065318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.065346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.065533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.065743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.065772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.065990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.066242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.066268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.066501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.066684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.066709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.066993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.067228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.067257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 
00:25:16.085 [2024-05-15 04:26:04.067504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.067731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.067759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.067951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.068197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.068225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.068409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.068603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.068633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.069012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.069202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.069227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.069417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.069583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.069607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.069831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.070089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.070117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.070350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.070587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.070627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 
00:25:16.085 [2024-05-15 04:26:04.070823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.071070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.071122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.071358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.071555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.071579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.071811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.072251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.072738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.072976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.073175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.073422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.073450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.073701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.073918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.073968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 
00:25:16.085 [2024-05-15 04:26:04.074163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.074414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.074443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.074635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.074873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.074900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.075121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.075409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.075434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.075701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.075946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.075975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.076208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.076415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.076440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.076634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.076837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.076865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.077079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.077383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.077412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 
00:25:16.085 [2024-05-15 04:26:04.077657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.077874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.077900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.085 [2024-05-15 04:26:04.078160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.078353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.085 [2024-05-15 04:26:04.078378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.085 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.078613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.078810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.078839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.079041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.079285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.079313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.079563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.079790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.079814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.080020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.080205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.080233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.080426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.080738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.080789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 
00:25:16.086 [2024-05-15 04:26:04.080987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.081185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.081214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.081404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.081646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.081670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.081828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.081998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.082025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.086 [2024-05-15 04:26:04.082203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.082452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.086 [2024-05-15 04:26:04.082480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.086 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.082700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.082946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.082974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.083166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.083369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.083395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.083590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.083789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.083815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 
00:25:16.358 [2024-05-15 04:26:04.084042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.084213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.084247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.084466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.084641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.084665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.084878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.085080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.085105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.085338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.085572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.085596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.085834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.086034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.086060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.086291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.086541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.086566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.086790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.086975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.087017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 
00:25:16.358 [2024-05-15 04:26:04.087239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.087451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.087479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.087696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.087915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.087949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.088172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.088402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.088427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.088776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.088959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.089001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.089176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.089465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.089490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.089687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.089878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.089902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.358 qpair failed and we were unable to recover it. 00:25:16.358 [2024-05-15 04:26:04.090086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.090322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.358 [2024-05-15 04:26:04.090348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 
00:25:16.359 [2024-05-15 04:26:04.090574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.090736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.090760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.091020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.091221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.091253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.091462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.091685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.091710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.091974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.092167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.092192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.092417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.092643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.092667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.092951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.093147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.093171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.093404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.093634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.093662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 
00:25:16.359 [2024-05-15 04:26:04.094480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.094722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.094748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.094962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.095177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.095204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.095428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.095627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.095656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.095924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.096124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.096150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.096372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.096575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.096603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.096830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.097268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 
00:25:16.359 [2024-05-15 04:26:04.097767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.097967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.098141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.098379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.098437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.098681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.098855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.098881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.099099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.099332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.099358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.099583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.099820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.099845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.100048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.100261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.100289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.100514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.100761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.100807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 
00:25:16.359 [2024-05-15 04:26:04.101047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.101241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.101266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.101562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.101803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.101830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.102054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.102290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.102338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.102648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.102867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.102896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.103132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.103373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.103421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.103644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.103840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.103866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.104049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.104252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.104295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 
00:25:16.359 [2024-05-15 04:26:04.104512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.104731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.104756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.359 qpair failed and we were unable to recover it. 00:25:16.359 [2024-05-15 04:26:04.104980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.359 [2024-05-15 04:26:04.105176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.105201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.105377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.105571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.105596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.105791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.105985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.106011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.106179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.106381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.106408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.106812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.107098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.107125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.107298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.107532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.107563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 
00:25:16.360 [2024-05-15 04:26:04.107760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.107983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.108009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.108205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.108462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.108501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.108726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.108952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.108994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.109186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.109406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.109431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.109681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.109878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.109903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.110117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.110361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.110385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.110695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.110908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.110938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 
00:25:16.360 [2024-05-15 04:26:04.111141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.111315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.111339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.111557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.111785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.111809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.112074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.112266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.112290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.112576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.112845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.112895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.113150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.113330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.113357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.113567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.113906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.113993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.114208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.114403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.114427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 
00:25:16.360 [2024-05-15 04:26:04.114632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.114880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.114907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.115110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.115367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.115391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.115615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.115830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.115855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.116057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.116243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.116270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.116490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.116699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.116723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.116934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.117136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.117163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.117377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.117562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.117586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 
00:25:16.360 [2024-05-15 04:26:04.117818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.118226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.118662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.360 [2024-05-15 04:26:04.118950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.360 qpair failed and we were unable to recover it. 00:25:16.360 [2024-05-15 04:26:04.119124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.119342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.119367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.119661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.119904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.119959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.120208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.120545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.120595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.120846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.121049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.121074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 
00:25:16.361 [2024-05-15 04:26:04.121296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.121537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.121564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.121771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.122018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.122046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.122346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.122590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.122617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.122867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.123255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.123674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.123888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.124138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.124420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.124476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 
00:25:16.361 [2024-05-15 04:26:04.124699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.124889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.124920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.125150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.125372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.125399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.125611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.125849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.125873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.126103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.126324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.126348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.126578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.126841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.126868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.127119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.127330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.127354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.127607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.127891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.127947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 
00:25:16.361 [2024-05-15 04:26:04.128169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.128337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.128360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.128551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.128808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.128832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.129047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.129273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.129323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.129519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.129739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.129764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.129995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.130343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.130393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.130645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.130809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.130834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.131000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.131227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.131251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 
00:25:16.361 [2024-05-15 04:26:04.131481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.131717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.131741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.132021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.132235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.132259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.132445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.132642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.132667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.132886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.133101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.133129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.361 [2024-05-15 04:26:04.133344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.133616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.361 [2024-05-15 04:26:04.133640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.361 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.133876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.134081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.134105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.134307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.134509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.134533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 
00:25:16.362 [2024-05-15 04:26:04.134746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.134976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.135000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.135330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.135570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.135594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.135806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.136045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.136086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.136315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.136575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.136598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.136828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.137248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.137661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.137923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 
00:25:16.362 [2024-05-15 04:26:04.138121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.138468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.138523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.138840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.139305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.139766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.139979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.140211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.140470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.140493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.140670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.140834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.140860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.141102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.141378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.141431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 
00:25:16.362 [2024-05-15 04:26:04.141643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.141858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.141886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.142106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.142337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.142361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.142542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.142722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.142747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.142973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.143156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.143183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.143409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.143585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.143609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.143838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.144085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.144112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.144315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.144529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.144556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 
00:25:16.362 [2024-05-15 04:26:04.144765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.144989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.145017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.362 qpair failed and we were unable to recover it. 00:25:16.362 [2024-05-15 04:26:04.145253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.362 [2024-05-15 04:26:04.145436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.145460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.145671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.145881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.145905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.146144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.146386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.146413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.146631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.146872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.146899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.147121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.147330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.147353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.147588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.147796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.147819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 
00:25:16.363 [2024-05-15 04:26:04.148001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.148242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.148266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.148480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.148695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.148723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.148944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.149159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.149188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.149415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.149666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.149694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.149912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.150196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.150236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.150424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.150730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.150753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.151045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.151258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.151285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 
00:25:16.363 [2024-05-15 04:26:04.151498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.151745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.151769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.152000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.152197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.152224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.152440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.152622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.152649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.152870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.153069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.153094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.153258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.153494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.153518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.153751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.153976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.154013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.154239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.154648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.154697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 
00:25:16.363 [2024-05-15 04:26:04.154889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.155086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.155114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.155358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.155676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.155700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.155922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.156173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.156200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.156441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.156647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.156674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.156899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.157133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.157159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.157427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.157637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.157661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.157866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.158101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.158126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 
00:25:16.363 [2024-05-15 04:26:04.158304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.158529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.158553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.158787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.158991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.159016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.159232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.159464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.159505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.363 qpair failed and we were unable to recover it. 00:25:16.363 [2024-05-15 04:26:04.159720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.363 [2024-05-15 04:26:04.159943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.159969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.160250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.160466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.160493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.160728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.160972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.161000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.161238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.161557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.161618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 
00:25:16.364 [2024-05-15 04:26:04.161851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.162023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.162049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.162339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.162513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.162536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.162764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.163094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.163122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.163359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.163582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.163606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.163861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.164085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.164113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.164332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.164514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.164541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.164771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.164979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.165004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 
00:25:16.364 [2024-05-15 04:26:04.165200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.165401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.165430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.165624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.165844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.165872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.166107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.166289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.166313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.166571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.166940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.167000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.167242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.167598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.167621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.167834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.168049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.168076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.168326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.168557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.168580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 
00:25:16.364 [2024-05-15 04:26:04.168805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.169000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.169028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.169262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.169494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.169535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.169756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.170267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.170743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.170997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.171250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.171439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.171464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.171730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.171961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.172000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 
00:25:16.364 [2024-05-15 04:26:04.172223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.172515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.172543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.172789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.173260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.173730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.173971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.364 [2024-05-15 04:26:04.174193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.174425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.364 [2024-05-15 04:26:04.174453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.364 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.174695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.174956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.174984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.175221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.175454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.175494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 
00:25:16.365 [2024-05-15 04:26:04.175708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.175998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.176027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.176267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.176504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.176531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.176779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.177021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.177049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.177272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.177516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.177543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.177766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.178026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.178051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.178276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.178488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.178517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.178775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.179040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.179079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 
00:25:16.365 [2024-05-15 04:26:04.179274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.179497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.179521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.179747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.179986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.180027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.180271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.180522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.180549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.180765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.180944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.180972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.181229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.181513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.181537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.181753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.181949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.181974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.182200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.182385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.182412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 
00:25:16.365 [2024-05-15 04:26:04.182705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.183064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.183089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.183300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.183510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.183539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.183838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.184305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.184713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.184960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.185166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.185370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.185398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.185621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.185844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.185869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 
00:25:16.365 [2024-05-15 04:26:04.186115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.186350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.186374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.186576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.186762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.186786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.187031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.187262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.187286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.187492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.187816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.187876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.188104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.188294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.188321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.188570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.188845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.188869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 00:25:16.365 [2024-05-15 04:26:04.189118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.189356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.365 [2024-05-15 04:26:04.189386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.365 qpair failed and we were unable to recover it. 
00:25:16.366 [2024-05-15 04:26:04.189628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.189858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.189882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.190141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.190379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.190402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.190659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.190886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.190913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.191145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.191390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.191417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.191600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.191766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.191791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.191966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.192318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.192371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.192622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.192849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.192877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 
00:25:16.366 [2024-05-15 04:26:04.193117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.193486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.193538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.193843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.194102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.194130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.194379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.194595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.194622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.194841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.195034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.195062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.195280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.195515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.195539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.195839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.196292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 
00:25:16.366 [2024-05-15 04:26:04.196753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.196985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.197208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.197449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.197473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.197681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.197892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.197919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.198141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.198366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.198391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.198586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.198814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.198837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.199085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.199279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.199304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.199512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.199713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.199743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 
00:25:16.366 [2024-05-15 04:26:04.200008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.200233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.200261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.200483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.200721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.200745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.200976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.201168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.201197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.201387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.201559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.201584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.201781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.202209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.202696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.202906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 
00:25:16.366 [2024-05-15 04:26:04.203124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.203468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.203529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.366 qpair failed and we were unable to recover it. 00:25:16.366 [2024-05-15 04:26:04.203723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.203943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.366 [2024-05-15 04:26:04.203971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.204159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.204403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.204455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.204641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.205258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.205743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.205985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.206234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.206561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.206628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 
00:25:16.367 [2024-05-15 04:26:04.206853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.207343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.207702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.207914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.208117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.208518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.208576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.208817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.209282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.209737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.209989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 
00:25:16.367 [2024-05-15 04:26:04.210189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.210401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.210429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.210675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.210917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.210951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.211139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.211329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.211358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.211575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.211927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.212003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.212220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.212463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.212490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.212678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.212895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.212922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.213184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.213505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.213561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 
00:25:16.367 [2024-05-15 04:26:04.213803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.214256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.214686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.214941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.215146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.215335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.215359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.215607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.215913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.215970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.216184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.216534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.367 [2024-05-15 04:26:04.216591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.367 qpair failed and we were unable to recover it. 00:25:16.367 [2024-05-15 04:26:04.216813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 
00:25:16.368 [2024-05-15 04:26:04.217264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.217669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.217890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.218092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.218279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.218308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.218556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.218749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.218776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.219041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.219233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.219261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.219437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.219655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.219683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.219936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.220123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.220151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 
00:25:16.368 [2024-05-15 04:26:04.220375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.220602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.220628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.220850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.221239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.221696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.221962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.222178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.222399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.222424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.222645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.222835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.222862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.223092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.223423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.223489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 
00:25:16.368 [2024-05-15 04:26:04.223705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.223917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.223951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.224175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.224358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.224390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.224666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.224885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.224913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.225154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.225422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.225449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.225633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.225828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.225856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.226071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.226300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.226324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.226524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.226698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.226723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 
00:25:16.368 [2024-05-15 04:26:04.226917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.227339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.227750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.227964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.228159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.228371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.228399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.228619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.228827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.228885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.229124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.229339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.229366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.368 [2024-05-15 04:26:04.229606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.229850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.229874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 
00:25:16.368 [2024-05-15 04:26:04.230095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.230358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.368 [2024-05-15 04:26:04.230406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.368 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.230596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.230818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.230843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.231068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.231319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.231344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.231520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.231684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.231709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.231939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.232336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.232747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.232993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 
00:25:16.369 [2024-05-15 04:26:04.233235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.233424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.233448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.233679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.233880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.233907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.234115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.234360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.234384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.234587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.234802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.234829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.235046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.235248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.235273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.235475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.235709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.235735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.235940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.236159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.236186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 
00:25:16.369 [2024-05-15 04:26:04.236437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.236660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.236684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.236919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.237140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.237165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.237373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.237772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.237825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.238051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.238222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.238247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.238435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.238657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.238684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.238928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.239139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.239163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.239362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.239579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.239607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 
00:25:16.369 [2024-05-15 04:26:04.239786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.239975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.240004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.240222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.240401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.240430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.240667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.240873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.240899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.241105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.241358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.241385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.241616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.241859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.241886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.242142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.242380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.242407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.242596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.242853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.242878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 
00:25:16.369 [2024-05-15 04:26:04.243075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.243277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.243304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.243519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.243822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.243847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.369 [2024-05-15 04:26:04.244086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.244345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.369 [2024-05-15 04:26:04.244397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.369 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.244645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.244864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.244889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.245107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.245321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.245346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.245530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.245896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.245950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.246174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.246449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.246477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 
00:25:16.370 [2024-05-15 04:26:04.246696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.246913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.246948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.247143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.247366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.247416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.247637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.247878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.247903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.248102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.248336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.248366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.248550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.248733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.248760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.249056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.249229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.249253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.249453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.249668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.249697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 
00:25:16.370 [2024-05-15 04:26:04.249924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.250141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.250169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.250386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.250731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.250785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.251025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.251273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.251297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.251468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.251696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.251723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.251908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.252159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.252187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.252386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.252633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.252658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.252852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.253047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.253077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 
00:25:16.370 [2024-05-15 04:26:04.253277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.253526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.253574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.253763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.254228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.254684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.254904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.255073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.255309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.255334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.255559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.255955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.256012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.256258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.256465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.256492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 
00:25:16.370 [2024-05-15 04:26:04.256689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.256937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.256965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.257148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.257399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.257424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.257678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.257937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.257962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.258163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.258388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.258415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.370 qpair failed and we were unable to recover it. 00:25:16.370 [2024-05-15 04:26:04.258599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.258819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.370 [2024-05-15 04:26:04.258846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.259052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.259250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.259282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.259538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.259756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.259781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 
00:25:16.371 [2024-05-15 04:26:04.259982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.260163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.260191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.260409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.260624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.260651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.260841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.261310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.261729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.261981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.262183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.262349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.262375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.262580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.262786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.262814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 
00:25:16.371 [2024-05-15 04:26:04.263039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.263209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.263252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.263445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.263666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.263690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.263887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.264115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.264143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.264334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.264580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.264608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.264799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.265039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.265064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.265246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.265502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.265529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.265748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.265971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.266000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 
00:25:16.371 [2024-05-15 04:26:04.266185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.266590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.266646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.266867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.267064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.267093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.267368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.267583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.267611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.267851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.268105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.268135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.268329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.268542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.268570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.268790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.269283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 
00:25:16.371 [2024-05-15 04:26:04.269710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.269898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.371 qpair failed and we were unable to recover it. 00:25:16.371 [2024-05-15 04:26:04.270181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.270431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.371 [2024-05-15 04:26:04.270458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.270793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.271292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.271745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.271978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.272167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.272369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.272398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.272645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.272866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.272891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 
00:25:16.372 [2024-05-15 04:26:04.273104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.273305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.273330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.273529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.273689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.273732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.273965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.274163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.274189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.274411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.274658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.274686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.274876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.275097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.275124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.275348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.275569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.275596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.275806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.276038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.276065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 
00:25:16.372 [2024-05-15 04:26:04.276290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.276478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.276507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.276731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.276985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.277029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.277209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.277422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.277450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.277695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.277882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.277907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.278114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.278313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.278340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.278561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.278750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.278777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.279001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.279200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.279241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 
00:25:16.372 [2024-05-15 04:26:04.279430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.279653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.279680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.279890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.280089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.280115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.280314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.280554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.280582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.280831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.281222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.281663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.281907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.282125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.282297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.282321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 
00:25:16.372 [2024-05-15 04:26:04.282522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.282761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.282788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.283024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.283200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.283243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.283492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.283699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.283727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.372 qpair failed and we were unable to recover it. 00:25:16.372 [2024-05-15 04:26:04.283982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.372 [2024-05-15 04:26:04.284177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.284201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.284434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.284693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.284720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.284917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.285114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.285139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.285330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.285517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.285545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 
00:25:16.373 [2024-05-15 04:26:04.285770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.285986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.286028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.286242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.286458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.286486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.286701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.286913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.286947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.287153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.287380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.287405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.287591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.287780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.287808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.288028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.288243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.288271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.288488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.288836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.288885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 
00:25:16.373 [2024-05-15 04:26:04.289110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.289359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.289386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.289608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.289783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.289811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.290037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.290234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.290262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.290490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.290712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.290741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.291002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.291246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.291275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.291457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.291674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.291701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.291927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.292124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.292149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 
00:25:16.373 [2024-05-15 04:26:04.292345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.292576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.292603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.292824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.293072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.293100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.293356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.293647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.293707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.293951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.294173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.294202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.294453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.294657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.294682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.294875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.295053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.295079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.295329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.295694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.295753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 
00:25:16.373 [2024-05-15 04:26:04.295985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.296187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.296228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.296444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.296635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.296663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.296907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.297135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.297161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.297376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.297588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.297615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.373 qpair failed and we were unable to recover it. 00:25:16.373 [2024-05-15 04:26:04.297834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.298005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.373 [2024-05-15 04:26:04.298032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.298225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.298447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.298474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.298779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.298985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.299014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 
00:25:16.374 [2024-05-15 04:26:04.299240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.299480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.299508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.299756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.299920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.299952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.300177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.300383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.300421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.300672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.300919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.300953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.301144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.301324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.301352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.301542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.301729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.301752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.302012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.302237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.302265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 
00:25:16.374 [2024-05-15 04:26:04.302483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.302695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.302724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.302953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.303140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.303167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.303396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.303600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.303625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.303798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.303991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.304021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.304245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.304456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.304483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.304673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.304883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.304917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.305118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.305311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.305336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 
00:25:16.374 [2024-05-15 04:26:04.305530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.305717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.305746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.305943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.306124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.306152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.306378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.306622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.306649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.306868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.307086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.307114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.307313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.307527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.307555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.307768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.308021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.308051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 00:25:16.374 [2024-05-15 04:26:04.308233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.308420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.374 [2024-05-15 04:26:04.308447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.374 qpair failed and we were unable to recover it. 
00:25:16.374 [2024-05-15 04:26:04.308656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.374 [2024-05-15 04:26:04.308868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.374 [2024-05-15 04:26:04.308897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:16.374 qpair failed and we were unable to recover it.
[... the same three-line error group (two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x7f2a54000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt between 04:26:04.309 and 04:26:04.378; only the timestamps change ...]
00:25:16.657 [2024-05-15 04:26:04.378260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.657 [2024-05-15 04:26:04.378473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.657 [2024-05-15 04:26:04.378501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:16.657 qpair failed and we were unable to recover it.
00:25:16.657 [2024-05-15 04:26:04.378713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.378904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.378940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.379190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.379598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.379653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.379851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.380099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.380128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.380308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.380533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.380558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.380808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.381077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.381106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.381325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.381538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.381566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.381786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 
00:25:16.657 [2024-05-15 04:26:04.382295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.382721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.382988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.383239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.383436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.383460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.383654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.383883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.383910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.384132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.384317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.384346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.384566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.384797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.384821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.385045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.385236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.385264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 
00:25:16.657 [2024-05-15 04:26:04.385511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.385682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.385708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.385954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.386166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.386194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.386442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.386634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.386658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.386855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.387093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.387118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.387287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.387538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.387565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.387785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.388259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 
00:25:16.657 [2024-05-15 04:26:04.388784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.388999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.389215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.389462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.389489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.389712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.389915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.389951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.390141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.390358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.390386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.390572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.390807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.657 [2024-05-15 04:26:04.390835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.657 qpair failed and we were unable to recover it. 00:25:16.657 [2024-05-15 04:26:04.391027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.391224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.391249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.391412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.391628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.391655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 
00:25:16.658 [2024-05-15 04:26:04.391876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.392094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.392122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.392350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.392593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.392618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.392816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.393257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.393754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.393999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.394194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.394423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.394451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.394650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.394834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.394863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 
00:25:16.658 [2024-05-15 04:26:04.395092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.395273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.395300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.395527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.395747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.395777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.395998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.396183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.396218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.396415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.396610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.396635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.396866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.397305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.397729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.397983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 
00:25:16.658 [2024-05-15 04:26:04.398201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.398382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.398410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.398618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.398818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.398846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.399073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.399274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.399299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.399480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.399667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.399696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.399915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.400278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.400707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.400921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 
00:25:16.658 [2024-05-15 04:26:04.401141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.401330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.401358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.401591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.401808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.401835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.402066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.402252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.402280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.402532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.402693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.402717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.402901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.403110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.403138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.403334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.403516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.403545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.658 qpair failed and we were unable to recover it. 00:25:16.658 [2024-05-15 04:26:04.403762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.658 [2024-05-15 04:26:04.403958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.403986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 
00:25:16.659 [2024-05-15 04:26:04.404175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.404391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.404420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.404606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.404794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.404821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.405064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.405252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.405280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.405506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.405688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.405716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.405943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.406140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.406167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.406406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.406627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.406652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.406819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.407042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.407071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 
00:25:16.659 [2024-05-15 04:26:04.407290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.407543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.407571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.407879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.408160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.408188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.408388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.408576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.408604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.408823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.409280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.409729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.409986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.410226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.410457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.410487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 
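The errno = 111 reported by every connect() call above is ECONNREFUSED on Linux: nothing was accepting TCP connections on 10.0.0.2 port 4420 (the standard NVMe/TCP port) at the time of each attempt. The tqpair=... value printed by nvme_tcp_qpair_connect_sock appears to be the address of the qpair object being connected, so the switch from 0x7f2a54000b90 to 0x1b70420 in the records that follow only reflects a different qpair object retrying the same target. A minimal, hedged check of what errno 111 maps to on the build host, assuming an x86 Linux/glibc environment (the file name errno_check.c is invented for this sketch and is not part of the test):

    /* errno_check.c - confirm what the "errno = 111" in the log corresponds to.
     * On x86 Linux this prints ECONNREFUSED = 111 and "Connection refused". */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("ECONNREFUSED  = %d\n", ECONNREFUSED);
        printf("strerror(111) = %s\n", strerror(111));
        return 0;
    }

Built with a plain "cc errno_check.c", this confirms the failures above mean the connection was actively refused (no listener), rather than timing out or being unreachable.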
00:25:16.659 [2024-05-15 04:26:04.410712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.410904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.410928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.411145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.411389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.411414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.411606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.411845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.411903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.412145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.412345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.412376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.412628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.412964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.412995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.413212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.413519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.413571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.413761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.413990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.414016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 
00:25:16.659 [2024-05-15 04:26:04.414251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.414466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.414490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.414715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.414906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.414939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.415181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.415412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.415440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.415690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.415875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.415902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.416123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.416348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.416376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.416594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.416788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.416833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 00:25:16.659 [2024-05-15 04:26:04.417039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.417275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.417302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.659 qpair failed and we were unable to recover it. 
00:25:16.659 [2024-05-15 04:26:04.417549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.417740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.659 [2024-05-15 04:26:04.417765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.418026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.418201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.418247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.418433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.418649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.418697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.418940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.419133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.419158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.419404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.419771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.419818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.420048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.420262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.420295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.420537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.420815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.420860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 
00:25:16.660 [2024-05-15 04:26:04.421090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.421431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.421498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.421752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.421970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.422012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.422182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.422440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.422465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.422636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.422878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.422906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.423194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.423465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.423510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.423769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.424019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.424044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.424260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.424593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.424650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 
00:25:16.660 [2024-05-15 04:26:04.424861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.425273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.425674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.425926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.426148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.426322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.426346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.426538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.426827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.426855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.427067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.427256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.427284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.427614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.427840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.427866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 
00:25:16.660 [2024-05-15 04:26:04.428067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.428269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.428295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.428468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.428658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.428683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.428884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.429084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.429112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.429332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.429514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.660 [2024-05-15 04:26:04.429542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.660 qpair failed and we were unable to recover it. 00:25:16.660 [2024-05-15 04:26:04.429762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.430258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.430749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.430987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 
00:25:16.661 [2024-05-15 04:26:04.431180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.431392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.431420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.431637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.431806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.431831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.432056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.432281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.432309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.432518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.432745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.432797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.433039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.433255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.433283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.433501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.433819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.433847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.434075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.434341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.434385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 
00:25:16.661 [2024-05-15 04:26:04.434601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.434844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.434871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.435063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.435358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.435409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.435586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.435887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.436022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.436257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.436447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.436474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.436702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.436943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.436971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.437162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.437397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.437425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.437675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.437921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.437954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 
00:25:16.661 [2024-05-15 04:26:04.438163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.438357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.438402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.438642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.438855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.438880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.439082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.439309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.439336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.439521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.439734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.439762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.439973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.440168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.440196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.440422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.440682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.440707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.440900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.441124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.441152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 
00:25:16.661 [2024-05-15 04:26:04.441374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.441646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.441673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.441890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.442328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.442715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.442961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.443168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.443349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.443376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.443559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.443814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.661 [2024-05-15 04:26:04.443864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.661 qpair failed and we were unable to recover it. 00:25:16.661 [2024-05-15 04:26:04.444106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.444328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.444353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 
00:25:16.662 [2024-05-15 04:26:04.444565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.444836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.444868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.445051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.445270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.445297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.445518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.445717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.445741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.445965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.446168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.446195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.446476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.446687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.446715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.446938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.447339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 
00:25:16.662 [2024-05-15 04:26:04.447712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.447986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.448182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.448400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.448428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.448642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.448862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.448886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.449084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.449405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.449471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.449711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.449961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.449989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.450227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.450446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.450473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.450672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.450916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.450948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 
00:25:16.662 [2024-05-15 04:26:04.451189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.451424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.451469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.451720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.451910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.451952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.452185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.452395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.452422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.452616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.452829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.452857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.453104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.453266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.453291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.453480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.453697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.453725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.453949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.454159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.454187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 
00:25:16.662 [2024-05-15 04:26:04.454410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.454625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.454649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.454874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.455138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.455166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.455378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.455705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.455760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.455977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.456191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.456218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.456428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.456635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.456660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.456861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.457084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.457114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.457331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.457577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.457605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 
00:25:16.662 [2024-05-15 04:26:04.457785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.458057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.662 [2024-05-15 04:26:04.458085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.662 qpair failed and we were unable to recover it. 00:25:16.662 [2024-05-15 04:26:04.458316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.458501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.458529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.458721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.458907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.458944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.459176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.459471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.459520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.459815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.460227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.460768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.460987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 
00:25:16.663 [2024-05-15 04:26:04.461179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.461505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.461563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.461833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.462041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.462069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.462294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.462514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.462559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.462775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.462971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.463001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.463246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.463586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.463637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.463926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.464183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.464207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.464428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.464701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.464749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 
00:25:16.663 [2024-05-15 04:26:04.464981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.465182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.465207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.465401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.465689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.465739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.465954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.466135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.466162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.466377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.466623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.466653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.466858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.467046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.467073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.467259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.467494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.467551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.467953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.468198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.468223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 
00:25:16.663 [2024-05-15 04:26:04.468513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.468711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.468736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.468943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.469113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.469138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.469337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.469537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.469564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.469772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.469983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.470012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.470205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.470426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.470455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.470678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.470896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.470924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.471137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.471314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.471341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 
00:25:16.663 [2024-05-15 04:26:04.471668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.471923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.471956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.472204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.472390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.472417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.663 [2024-05-15 04:26:04.472614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.472833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.663 [2024-05-15 04:26:04.472857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.663 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.473061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.473293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.473338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.473596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.473833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.473885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.474148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.474370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.474402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.474615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.474821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.474846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 
00:25:16.664 [2024-05-15 04:26:04.475042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.475224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.475251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.475575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.475839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.475868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.476085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.476298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.476323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.476520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.476729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.476754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.477008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.477188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.477216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.477466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.477755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.477783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.478035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.478248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.478303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 
00:25:16.664 [2024-05-15 04:26:04.478552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.478804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.478850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.479048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.479254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.479281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.479506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.479695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.479720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.479905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.480123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.480151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.480336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.480515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.480542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.480788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.481002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.481029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.481279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.481639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.481694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 
00:25:16.664 [2024-05-15 04:26:04.481923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.482165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.482192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.482764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.482820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.483042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.483231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.483256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.483528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.483854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.483881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.484135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.484309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.484334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.664 qpair failed and we were unable to recover it. 00:25:16.664 [2024-05-15 04:26:04.484507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.664 [2024-05-15 04:26:04.484687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.484751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.484969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.485182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.485206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 
00:25:16.665 [2024-05-15 04:26:04.485402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.485646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.485670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.485885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.486118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.486146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.486341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.486550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.486581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.486825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.487048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.487077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.487318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.487510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.487538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.487758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.487985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.488011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.488224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.488499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.488550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 
00:25:16.665 [2024-05-15 04:26:04.488788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.489014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.489042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.489171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6d0b0 is same with the state(5) to be set 00:25:16.665 [2024-05-15 04:26:04.489533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.489757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.489806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.490013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.490231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.490259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.490444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.490666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.490693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.490888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.491353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.491725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.491973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 
00:25:16.665 [2024-05-15 04:26:04.492211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.492430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.492458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.492684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.492876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.492902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.493096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.493285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.493312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.493529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.493758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.493803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.494050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.494219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.494244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.494468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.494687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.494712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.494958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.495139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.495164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 
00:25:16.665 [2024-05-15 04:26:04.495355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.495537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.495565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.495818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.496265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.496676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.496951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.497165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.497365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.497394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.497605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.497844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.497869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.665 qpair failed and we were unable to recover it. 00:25:16.665 [2024-05-15 04:26:04.498074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.665 [2024-05-15 04:26:04.498262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.498292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 
00:25:16.666 [2024-05-15 04:26:04.498559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.498722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.498747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.498917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.499331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.499741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.499992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.500189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.500445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.500472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.500736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.500909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.500938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.501106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.501296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.501324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 
00:25:16.666 [2024-05-15 04:26:04.501574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.501798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.501827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.502060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.502285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.502313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.502488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.502829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.502884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.503151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.503528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.503582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.503777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.504222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.504595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.504840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 
00:25:16.666 [2024-05-15 04:26:04.505034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.505202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.505245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.505465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.505636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.505660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.505884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.506129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.506155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.506352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.506534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.506561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.506776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.507276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.507694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.507945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 
00:25:16.666 [2024-05-15 04:26:04.508144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.508362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.508390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.508606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.508804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.508832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.509017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.509175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.509220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.509433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.509645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.509672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.509852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.510288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 00:25:16.666 [2024-05-15 04:26:04.510653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.510927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.666 qpair failed and we were unable to recover it. 
00:25:16.666 [2024-05-15 04:26:04.511142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.511359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.666 [2024-05-15 04:26:04.511386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.511607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.511793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.511820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.512035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.512260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.512285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.512506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.512701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.512730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.512952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.513169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.513197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.513446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.513734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.513785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.514008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.514229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.514258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 
00:25:16.667 [2024-05-15 04:26:04.514472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.514690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.514719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.514938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.515188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.515215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.515443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.515631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.515661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.515883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.516093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.516122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.516346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.516529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.516559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.516803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.517258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 
00:25:16.667 [2024-05-15 04:26:04.517735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.517982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.518174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.518417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.518444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.518657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.518871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.518898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.519095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.519291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.519317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.519533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.519832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.519860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.520085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.520299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.520326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.520539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.520750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.520775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 
00:25:16.667 [2024-05-15 04:26:04.521012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.521213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.521256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.521439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.521704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.521755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.522021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.522226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.522250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.522472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.522831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.522877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.523117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.523395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.523445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.523688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.523899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.523926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.524156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.524378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.524405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 
00:25:16.667 [2024-05-15 04:26:04.524627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.524882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.524910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.525160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.525482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.525535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.667 qpair failed and we were unable to recover it. 00:25:16.667 [2024-05-15 04:26:04.525718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.667 [2024-05-15 04:26:04.525903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.525938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.526172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.526501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.526556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.526798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.527025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.527055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.527266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.527462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.527490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.527708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.528016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.528041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 
00:25:16.668 [2024-05-15 04:26:04.528292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.528571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.528596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.528782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.529217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.529633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.529851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.530047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.530305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.530333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.530577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.530773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.530800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.531059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.531299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.531358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 
00:25:16.668 [2024-05-15 04:26:04.531548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.531757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.531784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.531997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.532181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.532206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.532398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.532640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.532684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.532902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.533127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.533152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.533329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.533580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.533628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.533849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.534318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 
00:25:16.668 [2024-05-15 04:26:04.534709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.534963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.535183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.535349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.535374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.535597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.535815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.535844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.536091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.536316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.536345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.536597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.536786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.536811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.537022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.537219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.537246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.537468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.537662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.537686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 
00:25:16.668 [2024-05-15 04:26:04.537883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.538125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.538154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.538378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.538654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.538679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.538944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.539204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.539232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.539427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.539619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.539648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.668 qpair failed and we were unable to recover it. 00:25:16.668 [2024-05-15 04:26:04.539860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.668 [2024-05-15 04:26:04.540047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.540076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.540266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.540426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.540452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.540678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.540890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.540918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 
00:25:16.669 [2024-05-15 04:26:04.541119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.541326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.541354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.541537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.541740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.541765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.541963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.542155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.542184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.542372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.542581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.542608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.542805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.543092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.543121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.543350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.543621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.543669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.543918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.544118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.544142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 
00:25:16.669 [2024-05-15 04:26:04.544343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.544556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.544583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.544823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.545243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.545732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.545987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.546205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.546397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.546424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.546617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.546847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.546872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.547079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.547270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.547299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 
00:25:16.669 [2024-05-15 04:26:04.547517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.547691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.547717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.547916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.548145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.548173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.548392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.548613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.548641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.548862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.549065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.549092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.549348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.549563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.549591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.549808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.549974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.550000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.550176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.550497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.550551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 
00:25:16.669 [2024-05-15 04:26:04.550746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.550978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.551006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.669 qpair failed and we were unable to recover it. 00:25:16.669 [2024-05-15 04:26:04.551228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.669 [2024-05-15 04:26:04.551423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.551448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.551647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.551898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.551926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.552113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.552355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.552383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.552605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.552824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.552853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.553063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.553279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.553307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.553522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.553976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.554004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 
00:25:16.670 [2024-05-15 04:26:04.554199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.554393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.554422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.554652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.554870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.554895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.555071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.555286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.555318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.555570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.555786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.555813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.556054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.556218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.556243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.556474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.556785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.556833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.557086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.557314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.557344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 
00:25:16.670 [2024-05-15 04:26:04.557570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.557775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.557804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.558005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.558189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.558218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.558417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.558611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.558636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.558805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.559270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.559693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.559916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.560126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.560317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.560345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 
00:25:16.670 [2024-05-15 04:26:04.560537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.560728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.560755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.560970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.561161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.561189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.561479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.561667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.561691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.561914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.562135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.562163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.562358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.562560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.562584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.562806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.563098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.563125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.563415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.563605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.563632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 
00:25:16.670 [2024-05-15 04:26:04.563828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.564222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.564742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.670 [2024-05-15 04:26:04.564966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.670 qpair failed and we were unable to recover it. 00:25:16.670 [2024-05-15 04:26:04.565186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.565353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.565377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.565541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.565731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.565756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.565975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.566163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.566190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.566414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.566687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.566711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 
00:25:16.671 [2024-05-15 04:26:04.566963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.567166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.567191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.567448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.567628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.567655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.567870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.568085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.568112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.568306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.568554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.568582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.568762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.568978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.569007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.569204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.569425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.569450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.569645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.569853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.569877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 
00:25:16.671 [2024-05-15 04:26:04.570108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.570388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.570437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.570649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.570896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.570923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.571154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.571408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.571432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.571628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.571816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.571845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.572072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.572319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.572367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.572609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.572780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.572805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.573094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.573341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.573365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 
00:25:16.671 [2024-05-15 04:26:04.573593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.573807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.573834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.574057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.574249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.574274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.574555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.574872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.574926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.575132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.575315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.575341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.575548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.575776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.575822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.576044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.576259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.576287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.576509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.576700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.576726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 
00:25:16.671 [2024-05-15 04:26:04.576927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.577120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.577147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.577375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.577659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.577706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.577925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.578175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.578202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.578416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.578627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.578654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.578841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.579060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.671 [2024-05-15 04:26:04.579091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.671 qpair failed and we were unable to recover it. 00:25:16.671 [2024-05-15 04:26:04.579314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.579532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.579560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.579803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 
00:25:16.672 [2024-05-15 04:26:04.580287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.580702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.580942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.581183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.581369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.581414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.581635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.581924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.581965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.582186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.582365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.582393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.582634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.582855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.582880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.583102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.583318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.583345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 
00:25:16.672 [2024-05-15 04:26:04.583571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.583756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.583780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.583983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.584237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.584262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.584519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.584803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.584851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.585067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.585297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.585321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.585563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.585750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.585778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.586000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.586191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.586216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.586411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.586597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.586624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 
00:25:16.672 [2024-05-15 04:26:04.586853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.587096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.587125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.587378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.587627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.587655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.587912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.588141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.588169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.588386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.588608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.588658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.588877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.589074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.589099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.589295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.589643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.589699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.589913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.590105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.590133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 
00:25:16.672 [2024-05-15 04:26:04.590380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.590599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.590624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.590812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.591314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.591692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.591943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.592168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.592359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.592383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.592640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.592885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.592913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 00:25:16.672 [2024-05-15 04:26:04.593105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.593385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.672 [2024-05-15 04:26:04.593430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.672 qpair failed and we were unable to recover it. 
00:25:16.672 [2024-05-15 04:26:04.593652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.593868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.593896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.594140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.594391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.594419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.594632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.594805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.594829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.595034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.595254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.595279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.595473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.595668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.595697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.595915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.596143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.596169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.596346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.596510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.596535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 
00:25:16.673 [2024-05-15 04:26:04.596746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.597258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.597685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.597970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.598187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.598441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.598468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.598650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.598830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.598858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.599053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.599315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.599361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.599612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.599778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.599803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 
00:25:16.673 [2024-05-15 04:26:04.600024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.600281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.600305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.600505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.600772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.600799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.600986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.601209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.601236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.601430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.601688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.601734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.601927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.602134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.602159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.602382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.602629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.602653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.602878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.603069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.603103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 
00:25:16.673 [2024-05-15 04:26:04.603332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.603599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.603649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.603867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.604109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.604138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.604358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.604573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.604601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.604792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.605010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.673 [2024-05-15 04:26:04.605038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.673 qpair failed and we were unable to recover it. 00:25:16.673 [2024-05-15 04:26:04.605252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.605577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.605628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.605879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.606094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.606123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.606324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.606517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.606543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 
00:25:16.674 [2024-05-15 04:26:04.606758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.607255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.607709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.607961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.608193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.608379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.608404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.608617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.608829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.608858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.609081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.609360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.609408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.609630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.609820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.609848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 
00:25:16.674 [2024-05-15 04:26:04.610068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.610284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.610311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.610527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.610794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.610839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.611059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.611299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.611345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.611567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.611921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.612008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.612197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.612398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.612422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.612617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.612848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.612873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.613045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.613237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.613265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 
00:25:16.674 [2024-05-15 04:26:04.613450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.613660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.613688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.613899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.614342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.614765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.614989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.615215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.615437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.615465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.615651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.615837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.615864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.616053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.616254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.616278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 
00:25:16.674 [2024-05-15 04:26:04.616461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.616691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.616718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.616944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.617376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.617738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.617991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.618187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.618377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.618405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.618603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.618796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.674 [2024-05-15 04:26:04.618820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.674 qpair failed and we were unable to recover it. 00:25:16.674 [2024-05-15 04:26:04.619038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.619267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.619292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 
00:25:16.675 [2024-05-15 04:26:04.619482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.619720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.619767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.619959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.620130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.620172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.620415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.620613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.620642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.620839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.621285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.621681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.621926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.622121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.622387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.622434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 
00:25:16.675 [2024-05-15 04:26:04.622637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.622816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.622841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.623023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.623222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.623250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.623465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.623681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.623726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.623960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.624164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.624192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.624414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.624640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.624664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.624885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.625135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.625163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.625377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.625617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.625665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 
00:25:16.675 [2024-05-15 04:26:04.625873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.626104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.626132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.626363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.626660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.626692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.626889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.627123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.627152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.627381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.627566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.627593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.627813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.628028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.628059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.628252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.628496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.628523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.628755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.628981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.629012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 
00:25:16.675 [2024-05-15 04:26:04.629274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.629464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.629492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.629713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.629907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.629937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.630138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.630351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.630378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.630593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.630815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.630866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.631074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.631309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.631357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.631549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.631736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.631764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.631961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.632182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.632207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 
00:25:16.675 [2024-05-15 04:26:04.632399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.632603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.675 [2024-05-15 04:26:04.632634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.675 qpair failed and we were unable to recover it. 00:25:16.675 [2024-05-15 04:26:04.632822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.633239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.633767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.633989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.634178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.634385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.634414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.634633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.634822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.634850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.635070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.635237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.635261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 
00:25:16.676 [2024-05-15 04:26:04.635441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.635736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.635785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.635985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.636208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.636233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.636424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.636644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.636693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.636881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.637331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.637758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.637978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.638178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.638372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.638400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 
00:25:16.676 [2024-05-15 04:26:04.638650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.638849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.638873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.639072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.639241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.639266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.639514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.639697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.639722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.639942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.640134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.640162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.640375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.640628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.640676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.640898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.641353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 
00:25:16.676 [2024-05-15 04:26:04.641763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.641993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.642193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.642389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.642446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.642666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.642881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.642909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.643141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.643355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.643403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.643647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.643866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.643891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.644097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.644282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.644311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.644552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.644854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.644906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 
00:25:16.676 [2024-05-15 04:26:04.645109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.645339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.645363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.645593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.645833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.645861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.646097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.646322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.676 [2024-05-15 04:26:04.646349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.676 qpair failed and we were unable to recover it. 00:25:16.676 [2024-05-15 04:26:04.646568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.646734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.646759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.646980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.647226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.647255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.647433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.647633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.647659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.647865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.648091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.648122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 
00:25:16.677 [2024-05-15 04:26:04.648341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.648570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.648595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.648765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.648988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.649013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.649227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.649419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.649447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.649633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.649847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.649880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.650082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.650287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.650319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.650507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.650729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.650757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.650945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.651160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.651184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 
00:25:16.677 [2024-05-15 04:26:04.651375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.651562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.651590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.651779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.651976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.652012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.652267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.652464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.652493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.677 [2024-05-15 04:26:04.652723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.652893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.677 [2024-05-15 04:26:04.652918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.677 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.653152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.653341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.653370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.653597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.653764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.653790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.654002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.654206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.654236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 
00:25:16.958 [2024-05-15 04:26:04.654407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.654574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.654599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.654811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.655035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.655061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.655226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.655481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.655530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.655730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.655992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.656018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.656204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.656458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.656484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.656694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.656950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.656976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.657157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.657398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.657451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 
00:25:16.958 [2024-05-15 04:26:04.657712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.657908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.657943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.658112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.658283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.658309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.658535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.658832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.658882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.659145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.659365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.659395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.659622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.659872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.659900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.660143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.660340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.660367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.660555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.660773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.660807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 
00:25:16.958 [2024-05-15 04:26:04.661046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.661249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.661278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.958 [2024-05-15 04:26:04.661511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.661697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.958 [2024-05-15 04:26:04.661727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.958 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.661949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.662120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.662145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.662324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.662555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.662608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.662833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.663032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.663062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.663254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.663589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.663640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.663898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.664108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.664139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 
00:25:16.959 [2024-05-15 04:26:04.664365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.664544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.664570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.664771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.664987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.665016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.665228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.665425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.665454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.665660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.665841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.665867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.666037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.666221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.666281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.666510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.666728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.666757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.666978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.667199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.667228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 
00:25:16.959 [2024-05-15 04:26:04.667459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.667721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.667770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.667992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.668190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.668215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.668411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.668608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.668636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.668830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.669277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.669676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.669910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.670141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.670339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.670367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 
00:25:16.959 [2024-05-15 04:26:04.670559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.670804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.670831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.671030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.671226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.671251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.671422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.671642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.671667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.671874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.672283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.672741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.672995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.673168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.673359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.673384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 
00:25:16.959 [2024-05-15 04:26:04.673579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.673871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.673900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.674104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.674306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.674355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.674577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.674741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.674766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.959 qpair failed and we were unable to recover it. 00:25:16.959 [2024-05-15 04:26:04.674936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.959 [2024-05-15 04:26:04.675162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.675192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.675420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.675646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.675694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.675911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.676119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.676147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.676335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.676562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.676589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 
00:25:16.960 [2024-05-15 04:26:04.676814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.677047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.677075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.677282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.677596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.677652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.677838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.678266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.678697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.678909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.679118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.679344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.679383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.679653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.679867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.679891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 
00:25:16.960 [2024-05-15 04:26:04.680094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.680283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.680337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.680557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.680940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.680998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.681245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.681438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.681463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.681663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.681890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.681917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.682108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.682298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.682326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.682518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.682738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.682763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.682961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.683131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.683156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 
00:25:16.960 [2024-05-15 04:26:04.683391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.683717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.683770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.684016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.684212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.684239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.684456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.684624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.684648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.684822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.685256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.685746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.685946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.686126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.686327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.686354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 
00:25:16.960 [2024-05-15 04:26:04.686566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.686779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.686806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.686998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.687190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.687218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.687437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.687700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.687764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.687973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.688185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.688213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.960 [2024-05-15 04:26:04.688400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.688619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.960 [2024-05-15 04:26:04.688647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.960 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.688861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.689051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.689081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.689309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.689508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.689563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 
00:25:16.961 [2024-05-15 04:26:04.689782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.690047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.690075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.690266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.690481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.690510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.690731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.690978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.691007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.691226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.691411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.691439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.691685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.691888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.691915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.692151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.692412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.692460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.692702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.692915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.692948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 
00:25:16.961 [2024-05-15 04:26:04.693133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.693319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.693349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.693610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.693865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.693927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.694157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.694409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.694437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.694636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.694800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.694826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.694997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.695224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.695264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.695508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.695699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.695728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.695913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.696160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.696185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 
00:25:16.961 [2024-05-15 04:26:04.696388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.696584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.696614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.696858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.697092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.697121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.697356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.697595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.697645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.697867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.698285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.698726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.698998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.699250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.699438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.699462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 
00:25:16.961 [2024-05-15 04:26:04.699660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.699889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.699917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.700121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.700346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.700371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.700562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.700803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.700831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.701021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.701221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.701250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.701475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.701701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.701728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.701927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.702178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.702206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.961 qpair failed and we were unable to recover it. 00:25:16.961 [2024-05-15 04:26:04.702427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.702643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.961 [2024-05-15 04:26:04.702689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 
00:25:16.962 [2024-05-15 04:26:04.702878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.703125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.703154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.703365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.703727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.703776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.703999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.704184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.704211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.704396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.704610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.704637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.704818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.705239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.705621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.705824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 
00:25:16.962 [2024-05-15 04:26:04.706032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.706251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.706279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.706495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.706693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.706717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.706905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.707333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.707768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.707977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.708172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.708324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.708349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.708568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.708783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.708813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 
00:25:16.962 [2024-05-15 04:26:04.709025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.709222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.709250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.709472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.709685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.709713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.709958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.710156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.710183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.710382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.710603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.710634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.710859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.711292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.711709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.711953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 
00:25:16.962 [2024-05-15 04:26:04.712210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.712495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.712543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.962 qpair failed and we were unable to recover it. 00:25:16.962 [2024-05-15 04:26:04.712785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.962 [2024-05-15 04:26:04.713053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.713082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.713274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.713493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.713521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.713764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.713981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.714006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.714212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.714498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.714545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.714762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.714988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.715017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.715237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.715487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.715512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 
00:25:16.963 [2024-05-15 04:26:04.715737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.715961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.715988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.716156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.716329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.716354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.716520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.716688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.716715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.716882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.717051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.717077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.717277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.717502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.717554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.717774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.717975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.718003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.718215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.718472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.718521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 
00:25:16.963 [2024-05-15 04:26:04.718739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.718972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.719001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.719182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.719371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.719398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.719620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.719872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.719900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.720114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.720333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.720366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.720583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.720897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.720957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.721155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.721429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.721456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.721673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.721916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.721963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 
00:25:16.963 [2024-05-15 04:26:04.722159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.722418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.722464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.722687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.722857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.722881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.723078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.723281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.723306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.723526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.723720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.723748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.723966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.724205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.724230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.724429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.724639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.724692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.724902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 
00:25:16.963 [2024-05-15 04:26:04.725372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.725783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.725988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.963 [2024-05-15 04:26:04.726185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.726347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.963 [2024-05-15 04:26:04.726371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.963 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.726532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.726723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.726755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.726955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.727133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.727158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.727381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.727591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.727622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.727846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 
00:25:16.964 [2024-05-15 04:26:04.728273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.728756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.728980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.729233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.729471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.729511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.729730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.729944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.729972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.730158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.730383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.730407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.730656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.730868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.730896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.731126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.731344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.731371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 
00:25:16.964 [2024-05-15 04:26:04.731593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.731888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.731946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.732164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.732377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.732404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.732631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.732875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.732902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.733093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.733322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.733346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.733565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.733977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.734006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.734265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.734481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.734508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.734734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.734953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.734981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 
00:25:16.964 [2024-05-15 04:26:04.735164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.735374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.735420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.735635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.735855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.735884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.736107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.736395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.736442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.736662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.736876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.736904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.737111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.737327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.737354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.737537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.737746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.737775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.738002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.738192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.738219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 
00:25:16.964 [2024-05-15 04:26:04.738409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.738624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.738652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.738883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.739056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.739082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.739302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.739617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.739644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.739826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.740049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.964 [2024-05-15 04:26:04.740076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.964 qpair failed and we were unable to recover it. 00:25:16.964 [2024-05-15 04:26:04.740285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.740542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.740588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.740828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.741229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 
00:25:16.965 [2024-05-15 04:26:04.741711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.741907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.742084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.742281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.742306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.742497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.742716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.742745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.742987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.743195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.743223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.743474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.743719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.743747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.743992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.744179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.744203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.744449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.744656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.744683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 
00:25:16.965 [2024-05-15 04:26:04.744927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.745146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.745173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.745410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.745608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.745634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.745856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.746293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.746738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.746986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.747204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.747412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.747438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.747650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.747859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.747886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 
00:25:16.965 [2024-05-15 04:26:04.748111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.748351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.748399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.748621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.748866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.748893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.749146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.749375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.749424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.749641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.749860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.749887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.750110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.750361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.750385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.750569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.750762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.750789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.751007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.751222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.751250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 
00:25:16.965 [2024-05-15 04:26:04.751461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.751674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.751701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.751911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.752122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.752147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.752346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.752585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.752612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.752825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.753063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.753095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.753311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.753592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.753642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.965 [2024-05-15 04:26:04.753857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.754083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.965 [2024-05-15 04:26:04.754111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.965 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.754325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.754525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.754549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 
00:25:16.966 [2024-05-15 04:26:04.754788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.755314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.755748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.755971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.756158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.756372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.756400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.756629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.756805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.756829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.757076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.757347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.757398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.757654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.757899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.757924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 
00:25:16.966 [2024-05-15 04:26:04.758140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.758355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.758382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.758625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.758787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.758811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.759027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.759284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.759309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.759506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.759724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.759751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.759953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.760153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.760180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.760420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.760727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.760755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.760979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.761213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.761240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 
00:25:16.966 [2024-05-15 04:26:04.761476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.761669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.761694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.761857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.762256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.762734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.762980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.763223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.763544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.763606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.763825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.764280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 
00:25:16.966 [2024-05-15 04:26:04.764743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.764936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.765134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.765316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.765341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.765584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.765887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.765941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.766154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.766416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.766440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.766628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.766846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.766871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.767071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.767298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.767325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 00:25:16.966 [2024-05-15 04:26:04.767580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.767795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.966 [2024-05-15 04:26:04.767819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.966 qpair failed and we were unable to recover it. 
00:25:16.966 [2024-05-15 04:26:04.768016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.768234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.768261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.768498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.768721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.768746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.768975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.769231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.769259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.769483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.769737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.769776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.770036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.770300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.770324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.770558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.770740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.770769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.771032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.771267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.771291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 
00:25:16.967 [2024-05-15 04:26:04.771525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.771740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.771767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.772019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.772243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.772271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.772517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.772896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.772966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.773188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.773432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.773471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.773702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.773971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.774005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.774221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.774447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.774471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.774730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.774984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.775012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 
00:25:16.967 [2024-05-15 04:26:04.775262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.775536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.775563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.775780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.775999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.776026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.776235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.776477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.776504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.776715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.776939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.776964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.777132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.777304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.777328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.777561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.777779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.777807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.778081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.778300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.778327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 
00:25:16.967 [2024-05-15 04:26:04.778535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.778759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.778786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.779027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.779393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.779456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.779698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.779909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.779942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.967 qpair failed and we were unable to recover it. 00:25:16.967 [2024-05-15 04:26:04.780167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.967 [2024-05-15 04:26:04.780391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.780418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.780618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.780836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.780859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.781097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.781356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.781410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.781693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.781895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.781924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 
00:25:16.968 [2024-05-15 04:26:04.782160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.782558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.782619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.782856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.783069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.783093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.783325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.783639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.783686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.783911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.784084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.784109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.784354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.784639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.784666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.784879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.785073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.785101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.785298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.785540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.785585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 
00:25:16.968 [2024-05-15 04:26:04.785826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.786060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.786084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.786286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.786531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.786571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.786805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.787029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.787058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.787273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.787608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.787631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.787841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.788131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.788156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.788396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.788643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.788682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.788874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 
00:25:16.968 [2024-05-15 04:26:04.789263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.789732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.789988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.790205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.790435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.790459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.790693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.790956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.790987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.791232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.791506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.791533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.791750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.792211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 
00:25:16.968 [2024-05-15 04:26:04.792685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.792943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.793164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.793524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.793566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.793828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.794315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.794755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.968 [2024-05-15 04:26:04.794999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.968 qpair failed and we were unable to recover it. 00:25:16.968 [2024-05-15 04:26:04.795301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.795620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.795670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.795926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.796119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.796144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 
00:25:16.969 [2024-05-15 04:26:04.796356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.796645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.796669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.796854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.797092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.797122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.797352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.797676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.797731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.797950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.798179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.798206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.798443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.798702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.798742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.798924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.799097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.799122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.799336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.799620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.799666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 
00:25:16.969 [2024-05-15 04:26:04.799914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.800115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.800142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.800361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.800625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.800650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.801054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.801250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.801279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.801516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.801746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.801785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.802010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.802377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.802432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.802650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.802885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.802912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.803179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.803361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.803384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 
00:25:16.969 [2024-05-15 04:26:04.803594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.803894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.803963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.804216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.804467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.804505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.804734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.804918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.804954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.805158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.805413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.805458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.805674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.805908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.805941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.806161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.806542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.806592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.806832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.807071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.807099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 
00:25:16.969 [2024-05-15 04:26:04.807310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.807525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.807552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.807818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.808054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.808078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.808311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.808524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.808549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.808781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.809245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.809684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.809942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.969 qpair failed and we were unable to recover it. 00:25:16.969 [2024-05-15 04:26:04.810201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.969 [2024-05-15 04:26:04.810389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.810416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 
00:25:16.970 [2024-05-15 04:26:04.810607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.810860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.810887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.811090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.811332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.811359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.811539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.811796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.811819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.812052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.812239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.812263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.812498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.812807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.812831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.813087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.813303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.813327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.813575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.813795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.813823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 
00:25:16.970 [2024-05-15 04:26:04.814060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.814237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.814261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.814504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.814719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.814746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.814960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.815174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.815197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.815408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.815761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.815824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.816008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.816299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.816348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.816574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.816748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.816772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.817036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.817414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.817464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 
00:25:16.970 [2024-05-15 04:26:04.817685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.817919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.817949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.818147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.818370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.818397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.818638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.818832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.818859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.819107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.819501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.819554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.819815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.820035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.820063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.820312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.820534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.820561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.820784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.821055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.821083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 
00:25:16.970 [2024-05-15 04:26:04.821316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.821526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.821551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.821812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.822144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.822214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.822431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.822759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.822816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.823079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.823301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.823327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.823547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.823796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.823852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.824071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.824282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.824309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.970 qpair failed and we were unable to recover it. 00:25:16.970 [2024-05-15 04:26:04.824544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.970 [2024-05-15 04:26:04.824957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.825013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 
00:25:16.971 [2024-05-15 04:26:04.825237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.825414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.825439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.825727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.825952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.825977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.826195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.826411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.826438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.826693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.826903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.826927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.827098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.827281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.827305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.827524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.827776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.827821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.828037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.828291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.828339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 
00:25:16.971 [2024-05-15 04:26:04.828590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.828844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.828871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.829075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.829304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.829331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.829574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.829854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.829912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.830137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.830353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.830380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.830617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.830787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.830811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.831087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.831304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.831328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.831546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.831767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.831794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 
00:25:16.971 [2024-05-15 04:26:04.832033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.832422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.832473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.832715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.832955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.832983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.833222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.833444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.833468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.833692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.834052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.834079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.834296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.834634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.834696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.834940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.835184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.835230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.835429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.835712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.835741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 
00:25:16.971 [2024-05-15 04:26:04.835988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.836185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.836213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.836422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.836678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.836705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.836959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.837208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.837236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.837443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.837664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.837689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.971 qpair failed and we were unable to recover it. 00:25:16.971 [2024-05-15 04:26:04.837882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.838100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.971 [2024-05-15 04:26:04.838128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.838356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.838630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.838658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.838878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.839096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.839124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 
00:25:16.972 [2024-05-15 04:26:04.839384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.839607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.839632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.839832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.840054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.840079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.840372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.840664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.840710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.840956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.841149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.841178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.841407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.841803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.841850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.842067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.842385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.842446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.842666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.842906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.842940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 
00:25:16.972 [2024-05-15 04:26:04.843146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.843369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.843394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.843687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.843925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.843961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.844152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.844488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.844555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.844805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.845279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.845723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.845976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.846174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.846365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.846394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 
00:25:16.972 [2024-05-15 04:26:04.846611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.846860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.846887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.847120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.847348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.847377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.847584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.847796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.847823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.848044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.848291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.848331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.848552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.848911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.848967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.849243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.849488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.849515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 00:25:16.972 [2024-05-15 04:26:04.849732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.849943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.849971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.972 qpair failed and we were unable to recover it. 
00:25:16.972 [2024-05-15 04:26:04.850189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.972 [2024-05-15 04:26:04.850405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.850429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.850657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.850847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.850873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.851135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.851366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.851390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.851591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.851832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.851859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.852075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.852283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.852306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.852519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.852698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.852723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.852945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.853158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.853185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 
00:25:16.973 [2024-05-15 04:26:04.853413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.853756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.853814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.854040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.854301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.854328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.854534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.854879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.854939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.855123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.855349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.855377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.855593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.855848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.855886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.856122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.856344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.856367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.856583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.856801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.856824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 
00:25:16.973 [2024-05-15 04:26:04.857034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.857316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.857340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.857622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.857797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.857821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.857995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.858311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.858364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.858588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.858837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.858883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.859087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.859324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.859348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.859565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.859832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.859856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.860099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.860318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.860343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 
00:25:16.973 [2024-05-15 04:26:04.860568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.860913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.860987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.861206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.861425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.861452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.861671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.861908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.861938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.862122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.862329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.862353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.862578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.862819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.862843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.863081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.863388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.863416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 00:25:16.973 [2024-05-15 04:26:04.863669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.863907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.863941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.973 qpair failed and we were unable to recover it. 
00:25:16.973 [2024-05-15 04:26:04.864163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.864406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.973 [2024-05-15 04:26:04.864433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.864622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.864866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.864905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.865143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.865367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.865396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.865642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.865823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.865847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.866056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.866276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.866303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.866508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.866721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.866744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.866910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.867137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.867165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 
00:25:16.974 [2024-05-15 04:26:04.867419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.867667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.867694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.867902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.868122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.868150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.868346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.868608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.868632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.868837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.869040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.869066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.869287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.869499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.869528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.869771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.870227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 
00:25:16.974 [2024-05-15 04:26:04.870688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.870974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.871205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.871446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.871473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.871691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.871911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.871944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.872191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.872456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.872499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.872719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.873006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.873058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.873308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.873546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.873570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.873761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.874061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.874089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 
00:25:16.974 [2024-05-15 04:26:04.874293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.874567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.874593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.874810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.875028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.875053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.875276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.875486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.875510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.875773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.876254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.974 [2024-05-15 04:26:04.876682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.974 [2024-05-15 04:26:04.876925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.974 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.877129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.877564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.877619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 
00:25:16.975 [2024-05-15 04:26:04.877830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.878066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.878091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.878313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.878508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.878532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.878798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.879022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.879048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.879290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.879504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.879528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.879761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.880013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.880038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.880275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.880520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.880568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.880763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.881019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.881047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 
00:25:16.975 [2024-05-15 04:26:04.881303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.881644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.881671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.881894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.882078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.882103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.882323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.882511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.882538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.882755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.882997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.883025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.883209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.883425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.883452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.883697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.883873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.883897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.884119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.884318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.884346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 
00:25:16.975 [2024-05-15 04:26:04.884528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.884752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.884780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.884965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.885187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.885214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.885430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.885823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.885884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.886131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.886349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.886376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.886569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.886908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.886973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.887198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.887414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.887443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.887672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.887905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.887941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 
00:25:16.975 [2024-05-15 04:26:04.888155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.888366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.888394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.888695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.888982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.889010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.889277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.889486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.889513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.889718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.889962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.889987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.890207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.890414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.890442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.975 qpair failed and we were unable to recover it. 00:25:16.975 [2024-05-15 04:26:04.890651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.975 [2024-05-15 04:26:04.890908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.890950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.891175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.891431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.891455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 
00:25:16.976 [2024-05-15 04:26:04.891687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.891936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.891964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.892190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.892542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.892603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.892825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.893063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.893091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.893315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.893554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.893581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.893824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.894048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.894077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.894290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.894672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.894700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.894943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 
00:25:16.976 [2024-05-15 04:26:04.895336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.895741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.895940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.896157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.896431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.896480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.896709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.896878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.896904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.897109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.897387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.897435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.897688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.897876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.897905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.898147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.898376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.898422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 
00:25:16.976 [2024-05-15 04:26:04.898637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.898894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.898927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.899182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.899372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.899402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.899619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.899816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.899844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.900044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.900313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.900361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.900605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.900805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.900830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.901052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.901292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.901322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 00:25:16.976 [2024-05-15 04:26:04.901541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.901767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.976 [2024-05-15 04:26:04.901813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.976 qpair failed and we were unable to recover it. 
00:25:16.977 [2024-05-15 04:26:04.902015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.902181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.902207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.902398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.902588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.902613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.902841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.903002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.903027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.903252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.903486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.903528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.903753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.903995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.904021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.904248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.904629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.904691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.904897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.905073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.905101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 
00:25:16.977 [2024-05-15 04:26:04.905334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.905632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.905681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.905941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.906158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.906187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.906418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.906829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.906873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.907074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.907272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.907300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.907670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.907884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.907913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.908138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.908312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.908337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.908536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.908705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.908731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 
00:25:16.977 [2024-05-15 04:26:04.908943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.909142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.909169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.909404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.909603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.909629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.909836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.910062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.910089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.910315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.910572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.910597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.910832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.911280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.911719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.911944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 
00:25:16.977 [2024-05-15 04:26:04.912160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.912354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.912379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.912579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.912799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.912824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.912990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.913166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.913191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.913380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.913608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.913633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.913876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.914127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.914154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.914334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.914557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.977 [2024-05-15 04:26:04.914583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.977 qpair failed and we were unable to recover it. 00:25:16.977 [2024-05-15 04:26:04.914809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 
00:25:16.978 [2024-05-15 04:26:04.915290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.915784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.915994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.916218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.916443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.916470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.916696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.916958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.916986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.917194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.917438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.917463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.917747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.917956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.917985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.918210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.918445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.918470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 
00:25:16.978 [2024-05-15 04:26:04.918666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.918853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.918879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.919074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.919247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.919288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.919538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.919764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.919789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.919979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.920181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.920206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.920429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.920594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.920619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.920778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.920979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.921005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.921207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.921395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.921420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 
00:25:16.978 [2024-05-15 04:26:04.921621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.921855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.921879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.922123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.922491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.922543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.922814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.923270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.923749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.923947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.924154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.924351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.924377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.924631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.924838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.924866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 
00:25:16.978 [2024-05-15 04:26:04.925082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.925315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.925363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.925600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.925828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.925853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.926056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.926258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.926283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.926519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.926694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.926719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.926897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.927113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.978 [2024-05-15 04:26:04.927139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.978 qpair failed and we were unable to recover it. 00:25:16.978 [2024-05-15 04:26:04.927331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.927536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.927561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.927745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 
00:25:16.979 [2024-05-15 04:26:04.928292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.928709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.928937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.929138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.929337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.929366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.929542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.929743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.929768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.929968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.930349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.930791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.930988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 
00:25:16.979 [2024-05-15 04:26:04.931180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.931355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.931381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.931600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.931792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.931817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.932022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.932215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.932241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.932439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.932607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.932632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.932795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.933236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.933656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.933872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 
00:25:16.979 [2024-05-15 04:26:04.934069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.934241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.934268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.934504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.934725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.934750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.934977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.935291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.935346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.935577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.935770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.935796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.935960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.936155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.936181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.936376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.936574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.936599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.936816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.936983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.937011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 
00:25:16.979 [2024-05-15 04:26:04.937211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.937436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.937462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.937686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.937902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.937942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.938180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.938378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.979 [2024-05-15 04:26:04.938403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.979 qpair failed and we were unable to recover it. 00:25:16.979 [2024-05-15 04:26:04.938579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.938801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.938827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.939053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.939409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.939459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.939682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.939925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.939962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.940193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.940423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.940448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 
00:25:16.980 [2024-05-15 04:26:04.940642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.940829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.940854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.941049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.941238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.941263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.941470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.941748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.941773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.941970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.942214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.942238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.942417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.942586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.942632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.942850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.943273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 
00:25:16.980 [2024-05-15 04:26:04.943688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.943911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.944111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.944294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.944318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.944529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.944723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.944750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.944912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.945100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.945125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.945391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.945604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.945628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.945867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.946094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.946120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.946340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.946590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.946639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 
00:25:16.980 [2024-05-15 04:26:04.946861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.947320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.947774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.947970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.948141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.948326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.948350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.948559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.948744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.948770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.948969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.949399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 
00:25:16.980 [2024-05-15 04:26:04.949764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.949984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.950207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.950379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.950405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.950591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.950768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.950793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.950965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.951171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.951196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.980 qpair failed and we were unable to recover it. 00:25:16.980 [2024-05-15 04:26:04.951362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.951543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.980 [2024-05-15 04:26:04.951567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.951801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.951983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.952008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.952208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.952382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.952407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 
00:25:16.981 [2024-05-15 04:26:04.952568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.952740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.952767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.952971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.953365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.953786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.953986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.954212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.954410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.954437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.954608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.954809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.954835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.955034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.955205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.955231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 
00:25:16.981 [2024-05-15 04:26:04.955458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.955623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.955648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.955816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.956239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.956675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.956895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.957133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.957331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.957358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.957554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.957723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.957748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.957952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.958117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.958142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 
00:25:16.981 [2024-05-15 04:26:04.958355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.958556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.958581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.958801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.959218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.959649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.959877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.960072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.960246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.960271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.960494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.960658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.960683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 00:25:16.981 [2024-05-15 04:26:04.960880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.961079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.981 [2024-05-15 04:26:04.961104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:16.981 qpair failed and we were unable to recover it. 
00:25:17.253 [2024-05-15 04:26:04.961305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.961479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.961504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.961678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.961869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.961894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.962067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.962244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.962270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.962436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.962631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.962656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.962858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.963261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.963664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.963861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 
00:25:17.253 [2024-05-15 04:26:04.964084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.964280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.964306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.964527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.964705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.964731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.964953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.965356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.965750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.965949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.966119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.966293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.966319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 00:25:17.253 [2024-05-15 04:26:04.966487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.966656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.253 [2024-05-15 04:26:04.966681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.253 qpair failed and we were unable to recover it. 
00:25:17.254 [2024-05-15 04:26:04.966861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.967242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.967629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.967919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.968155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.968348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.968373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.968574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.968746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.968773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.968958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.969181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.969207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.969429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.969622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.969647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 
00:25:17.254 [2024-05-15 04:26:04.969815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.970220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.970616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.970816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.970995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.971194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.971234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.971422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.971583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.971608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.971786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.971979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.972005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.972170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.972369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.972394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 
00:25:17.254 [2024-05-15 04:26:04.972561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.972756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.972782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.972982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.973375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.973749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.973950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.974117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.974312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.974337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.974532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.974725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.974751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.974944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 
00:25:17.254 [2024-05-15 04:26:04.975362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.975729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.975928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.976135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.976310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.976336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.976508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.976680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.976705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.976946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.977348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.977748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.977972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 
00:25:17.254 [2024-05-15 04:26:04.978197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.978368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.978395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.254 qpair failed and we were unable to recover it. 00:25:17.254 [2024-05-15 04:26:04.978631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.254 [2024-05-15 04:26:04.978798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.978824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.979009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.979182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.979209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.979403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.979615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.979640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.979856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.980261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.980683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.980906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 
00:25:17.255 [2024-05-15 04:26:04.981136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.981332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.981358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.981529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.981731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.981757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.981964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.982136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.982163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.982425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.982640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.982669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.982882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.983088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.983117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.983333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.983599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.983645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.983846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 
00:25:17.255 [2024-05-15 04:26:04.984254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.984747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.984972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.985144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.985342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.985367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.985561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.985781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.985806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.986026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.986248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.986276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.986496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.986663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.986688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.986861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 
00:25:17.255 [2024-05-15 04:26:04.987248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.987680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.987876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.988050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.988230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.988255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.988429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.988612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.988638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.988834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.989277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.989703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.989957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 
00:25:17.255 [2024-05-15 04:26:04.990147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.990362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.990387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.990584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.990794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.990819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.255 [2024-05-15 04:26:04.991064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.991399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.255 [2024-05-15 04:26:04.991450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.255 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.991648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.991866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.991891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.992069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.992260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.992285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.992478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.992679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.992706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.992939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.993136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.993161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 
00:25:17.256 [2024-05-15 04:26:04.993391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.993562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.993587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.993807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.993996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.994022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.994246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.994443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.994468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.994644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.994868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.994893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.995134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.995354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.995401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.995614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.995813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.995838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.996036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.996205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.996231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 
00:25:17.256 [2024-05-15 04:26:04.996457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.996654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.996679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.996847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.997074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.997100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.997295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.997540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.997573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.997788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.997984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.998011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.998213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.998400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.998426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.998647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.998813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.998840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.999039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.999235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.999261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 
00:25:17.256 [2024-05-15 04:26:04.999461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.999630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:04.999655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:04.999855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.000225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.000620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.000808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.001008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.001177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.001203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.001401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.001622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.001652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.001824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.001994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.002020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 
00:25:17.256 [2024-05-15 04:26:05.002198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.002429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.002454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.002718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.002908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.002950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.003151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.003431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.003456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.256 [2024-05-15 04:26:05.003649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.003859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.256 [2024-05-15 04:26:05.003888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.256 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.004088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.004300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.004325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.004551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.004728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.004754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.004950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 
00:25:17.257 [2024-05-15 04:26:05.005347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.005763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.005994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.006194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.006390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.006415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.006598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.006771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.006796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.006973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.007198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.007224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.007431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.007626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.007651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.007827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.007999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.008025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 
00:25:17.257 [2024-05-15 04:26:05.008220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.008439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.008464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.008694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.008895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.008922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.009122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.009319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.009344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.009540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.009732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.009757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.009965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.010132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.010159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.010363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.010559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.010584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.010808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 
00:25:17.257 [2024-05-15 04:26:05.011234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.011624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.011822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.012024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.012255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.012281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.012454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.012651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.012676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.012843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.013268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.013689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.013908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 
00:25:17.257 [2024-05-15 04:26:05.014116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.014290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.014317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.257 qpair failed and we were unable to recover it. 00:25:17.257 [2024-05-15 04:26:05.014547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.257 [2024-05-15 04:26:05.014729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.014754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.014967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.015357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.015750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.015947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.016138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.016295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.016335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.016553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.016743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.016768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 
00:25:17.258 [2024-05-15 04:26:05.016971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.017388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.017781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.017986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.018159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.018358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.018384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.018580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.018754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.018779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.018954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.019317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 
00:25:17.258 [2024-05-15 04:26:05.019672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.019869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.020045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.020239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.020265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.020453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.020652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.020677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.020889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.021088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.021114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.021307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.021502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.021529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.021768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.021991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.022018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.022242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.022434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.022464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 
00:25:17.258 [2024-05-15 04:26:05.022670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.022866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.022891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.023098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.023276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.023301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.023503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.023701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.023726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.023950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.024337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.024731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.024970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.025175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.025349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.025376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 
00:25:17.258 [2024-05-15 04:26:05.025571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.025743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.025768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.258 [2024-05-15 04:26:05.025941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.026107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.258 [2024-05-15 04:26:05.026133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.258 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.026300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.026497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.026522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.026693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.026914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.026946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.027192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.027456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.027501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.027690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.027904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.027941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.028131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.028324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.028350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 
00:25:17.259 [2024-05-15 04:26:05.028539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.028716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.028741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.028969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.029166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.029190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.029387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.029557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.029583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.029808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.029997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.030024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.030226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.030426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.030451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.030619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.030815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.030842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.031072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.031237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.031263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 
00:25:17.259 [2024-05-15 04:26:05.031456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.031680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.031705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.031903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.032268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.032653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.032901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.033094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.033264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.033289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.033466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.033638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.033663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.033867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 
00:25:17.259 [2024-05-15 04:26:05.034250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.034720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.034946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.035174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.035395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.035420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.035616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.035785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.035810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.036009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.036403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.036765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.036960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 
00:25:17.259 [2024-05-15 04:26:05.037135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.037322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.037347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.037528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.037706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.037746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.037949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.038177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.038202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.259 qpair failed and we were unable to recover it. 00:25:17.259 [2024-05-15 04:26:05.038426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.259 [2024-05-15 04:26:05.038664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.038691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.038872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.039266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.039652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.039867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 
00:25:17.260 [2024-05-15 04:26:05.040044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.040241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.040266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.040465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.040663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.040689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.040884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.041125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.041151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.041361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.041584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.041609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.041830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.042252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.042650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.042869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 
00:25:17.260 [2024-05-15 04:26:05.043050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.043227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.043253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.043452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.043680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.043705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.043911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.044327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.044749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.044982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.045148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.045346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.045372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.045535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.045736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.045762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 
00:25:17.260 [2024-05-15 04:26:05.045991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.046190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.046215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.046390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.046582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.046608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.046780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.046982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.047009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.047211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.047403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.047428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.047624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.047796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.047822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.048019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.048218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.048244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.048422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.048643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.048669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 
00:25:17.260 [2024-05-15 04:26:05.048839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.049250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.049669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.049918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.050102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.050298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.050324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.050522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.050745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.050770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.260 qpair failed and we were unable to recover it. 00:25:17.260 [2024-05-15 04:26:05.050954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.260 [2024-05-15 04:26:05.051157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.051184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.051378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.051563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.051590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 
00:25:17.261 [2024-05-15 04:26:05.051783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.051984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.052011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.052212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.052380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.052405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.052576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.052746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.052773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.052955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.053355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.053772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.053995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.054193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.054365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.054391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 
00:25:17.261 [2024-05-15 04:26:05.054565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.054743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.054768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.054949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.055170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.055195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.055388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.055563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.055590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.055802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.056201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.056572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.056792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.056988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 
00:25:17.261 [2024-05-15 04:26:05.057383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.057792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.057992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.058164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.058332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.058357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.058570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.058781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.058807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.059026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.059196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.059221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.059396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.059595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.059621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.059795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.059990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.060020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 
00:25:17.261 [2024-05-15 04:26:05.060245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.060435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.060460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.060635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.060810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.060836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.061017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.061400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.061799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.061994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.062169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.062370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.062395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 00:25:17.261 [2024-05-15 04:26:05.062593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.062795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.261 [2024-05-15 04:26:05.062820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.261 qpair failed and we were unable to recover it. 
00:25:17.261 [2024-05-15 04:26:05.062990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.063170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.063195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.063369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.063577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.063602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.063800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.063999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.064030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.064200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.064399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.064424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.064625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.064836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.064861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.065073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.065249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.065275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.065502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.065702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.065727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 
00:25:17.262 [2024-05-15 04:26:05.065928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.066350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.066733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.066964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.067131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.067351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.067377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.067574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.067798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.067824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.068017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.068216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.068245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.068444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.068629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.068655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 
00:25:17.262 [2024-05-15 04:26:05.068876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.069271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.069663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.069878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.070097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.070269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.070295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.070472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.070643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.070669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.070865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.071277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 
00:25:17.262 [2024-05-15 04:26:05.071695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.071891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.072111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.072277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.072307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.072486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.072679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.072705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.262 [2024-05-15 04:26:05.072879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.073077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.262 [2024-05-15 04:26:05.073104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.262 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.073286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.073452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.073478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.073958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.073985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.074184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.074348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.074374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 
00:25:17.263 [2024-05-15 04:26:05.074594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.074787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.074813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.074980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.075150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.075175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.075361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.075557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.075582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.075782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.075984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.076010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.076208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.076403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.076429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.076602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.076803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.076829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.077030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.077229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.077254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 
00:25:17.263 [2024-05-15 04:26:05.077450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.077646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.077672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.077869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.078279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.078728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.078960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.079157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.079368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.079394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.079615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.079817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.079843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.080012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.080186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.080213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 
00:25:17.263 [2024-05-15 04:26:05.080444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.080644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.080669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.080895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.081272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.081668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.081889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.082100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.082302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.082327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.082500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.082693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.082718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.082949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 
00:25:17.263 [2024-05-15 04:26:05.083342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.083740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.083977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.084156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.084353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.084378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.084576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.084773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.084799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.085029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.085251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.085277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.263 qpair failed and we were unable to recover it. 00:25:17.263 [2024-05-15 04:26:05.085475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-05-15 04:26:05.085671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.085696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.085860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 
00:25:17.264 [2024-05-15 04:26:05.086220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.086641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.086854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.087017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.087216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.087243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.087472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.087636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.087661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.087860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.088281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.088640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.088845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 
00:25:17.264 [2024-05-15 04:26:05.089034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.089233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.089259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.089463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.089652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.089677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.089868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.090270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.090673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.090900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.091108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.091308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.091333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.091526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.091728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.091752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 
00:25:17.264 [2024-05-15 04:26:05.091926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.092332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.092721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.092926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.093142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.093338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.093364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.093534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.093729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.093755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.093958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.094155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.094180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.094382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.094578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.094604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 
00:25:17.264 [2024-05-15 04:26:05.094801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.094994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.095021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.095237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.095396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.095421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.095631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.095859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.095884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.096050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.096276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.096301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.096469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.096642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.096667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.096868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.097059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.097085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 00:25:17.264 [2024-05-15 04:26:05.097284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.097483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.264 [2024-05-15 04:26:05.097510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.264 qpair failed and we were unable to recover it. 
00:25:17.264 [2024-05-15 04:26:05.097708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.097880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.097905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.098083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.098277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.098302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.098487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.098687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.098714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.098914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.099317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.099747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.099978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.100152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.100381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.100407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 
00:25:17.265 [2024-05-15 04:26:05.100625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.100824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.100850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.101051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.101223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.101250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.101455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.101630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.101656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.101861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.102285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.102687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.102886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.103092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.103288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.103313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 
00:25:17.265 [2024-05-15 04:26:05.103504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.103704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.103730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.103907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.104307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.104708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.104950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.105128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.105324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.105350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.105549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.105753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.105779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.105982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 
00:25:17.265 [2024-05-15 04:26:05.106372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.106734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.106987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.107157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.107448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.107489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.107715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.107911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.107942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.108111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.108316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.108342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.108539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.108714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.108739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.108906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.109112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.109138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 
00:25:17.265 [2024-05-15 04:26:05.109348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.109580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.109605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.265 qpair failed and we were unable to recover it. 00:25:17.265 [2024-05-15 04:26:05.109778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.265 [2024-05-15 04:26:05.109991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.110018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.110188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.110434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.110460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.110631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.110878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.110904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.111098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.111275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.111301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.111478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.111644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.111669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.111859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 
00:25:17.266 [2024-05-15 04:26:05.112271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.112660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.112854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.113024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.113224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.113250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.113444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.113629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.113655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.113847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.114240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.114665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.114924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 
00:25:17.266 [2024-05-15 04:26:05.115178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.115375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.115401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.115573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.115766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.115792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.115963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.116165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.116190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.116422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.116589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.116614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.116824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.117227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.117654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.117849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 
00:25:17.266 [2024-05-15 04:26:05.118029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.118426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.118784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.118999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.119220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.119390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.119415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.119580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.119787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.119813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.266 [2024-05-15 04:26:05.119976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.120158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.266 [2024-05-15 04:26:05.120185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.266 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.120364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.120567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.120593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 
00:25:17.267 [2024-05-15 04:26:05.120762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.120937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.120963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.121153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.121353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.121379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.121550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.121749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.121775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.121981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.122370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.122752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.122972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.123137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.123298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.123324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 
00:25:17.267 [2024-05-15 04:26:05.123526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.123753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.123779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.124004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.124212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.124237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.124422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.124617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.124643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.124812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.125242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.125658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.125881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.126084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.126258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.126288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 
00:25:17.267 [2024-05-15 04:26:05.126514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.126677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.126704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.126935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.127345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.127770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.127991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.128196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.128398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.128425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.128625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.128795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.128821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.128989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.129190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.129215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 
00:25:17.267 [2024-05-15 04:26:05.129410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.129630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.129655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.129843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.130302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.130683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.130866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.131106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.131303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.131328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.131526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.131698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.131727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 00:25:17.267 [2024-05-15 04:26:05.131951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.132120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.132146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.267 qpair failed and we were unable to recover it. 
00:25:17.267 [2024-05-15 04:26:05.132323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.267 [2024-05-15 04:26:05.132519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.132545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.132717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.132944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.132970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.133149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.133350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.133376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.133572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.133765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.133790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.133991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.134156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.134182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.134397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.134574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.134603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.134805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 
00:25:17.268 [2024-05-15 04:26:05.135218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.135636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.135830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.135992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.136188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.136213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.136381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.136576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.136600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.136798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.136998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.137024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.137225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.137420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.137446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.137612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.137846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.137872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 
00:25:17.268 [2024-05-15 04:26:05.138087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.138272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.138297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.138485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.138704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.138734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.138980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.139159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.139186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.139411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.139607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.139632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.139829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.140198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.140584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.140772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 
00:25:17.268 [2024-05-15 04:26:05.140981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.141150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.141176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.141377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.141594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.141621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.141843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.142249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.142675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.142908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.143152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.143387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.143412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.143637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.143813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.143838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 
00:25:17.268 [2024-05-15 04:26:05.144070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.144242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.144270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.144492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.144686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.268 [2024-05-15 04:26:05.144711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.268 qpair failed and we were unable to recover it. 00:25:17.268 [2024-05-15 04:26:05.144893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.145269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.145690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.145912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.146106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.146303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.146329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.146514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.146735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.146761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 
00:25:17.269 [2024-05-15 04:26:05.146939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.147170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.147196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.147425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.147664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.147690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.147894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.148117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.148143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.148403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.148735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.148761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.148945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.149147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.149174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.149359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.149532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.149572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.149816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.149995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.150022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 
00:25:17.269 [2024-05-15 04:26:05.150219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.150420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.150446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.150647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.150879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.150904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.151095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.151291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.151316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.151524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.151750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.151776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.151980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.152175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.152201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.152457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.152635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.152662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.152867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 
00:25:17.269 [2024-05-15 04:26:05.153291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.153694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.153997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.154195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.154370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.154411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.154626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.154821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.154846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.155047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.155259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.155285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.155466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.155704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.155727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.155971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.156150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.156176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 
00:25:17.269 [2024-05-15 04:26:05.156405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.156622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.156647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.156860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.157268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.269 [2024-05-15 04:26:05.157669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.269 [2024-05-15 04:26:05.157955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.269 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.158161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.158336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.158360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.158592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.158786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.158811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.159015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.159228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.159253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 
00:25:17.270 [2024-05-15 04:26:05.159478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.159664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.159689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.159888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.160125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.160151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.160357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.160524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.160550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.160786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.160981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.161008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.161172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.161387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.161412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.161644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.161859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.161885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.162061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.162263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.162289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 
00:25:17.270 [2024-05-15 04:26:05.162483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.162657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.162682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.162885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.163304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.163702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.163899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.164105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.164290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.164330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.164711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.164941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.164967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.165166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.165366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.165392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 
00:25:17.270 [2024-05-15 04:26:05.165617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.165818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.165844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.166036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.166238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.166497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.166689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.166715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.166889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.167101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.167127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.167327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.167517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.167544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.167763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.167974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.168000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.168179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.168441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.168466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 
00:25:17.270 [2024-05-15 04:26:05.168666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.168868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.168894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.169102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.169298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.169323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.169514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.169730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.169758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.169949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.170157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.170184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.170383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.170620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.170648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.270 qpair failed and we were unable to recover it. 00:25:17.270 [2024-05-15 04:26:05.170875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.270 [2024-05-15 04:26:05.171112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.271 [2024-05-15 04:26:05.171139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.271 qpair failed and we were unable to recover it. 00:25:17.271 [2024-05-15 04:26:05.171318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.271 [2024-05-15 04:26:05.171572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.271 [2024-05-15 04:26:05.171614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.271 qpair failed and we were unable to recover it. 
00:25:17.271 [2024-05-15 04:26:05.171836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.271 [2024-05-15 04:26:05.172038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.271 [2024-05-15 04:26:05.172066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420
00:25:17.271 qpair failed and we were unable to recover it.
00:25:17.271 [2024-05-15 04:26:05.172262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.271 [2024-05-15 04:26:05.172445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.271 [2024-05-15 04:26:05.172471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420
00:25:17.271 qpair failed and we were unable to recover it.
[... the same four-line sequence (two "connect() failed, errno = 111" messages, one "sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420" message, and "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt, with timestamps advancing from 04:26:05.172 through 04:26:05.253 ...]
00:25:17.276 [2024-05-15 04:26:05.253303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.276 [2024-05-15 04:26:05.253492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.276 [2024-05-15 04:26:05.253524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420
00:25:17.276 qpair failed and we were unable to recover it.
00:25:17.276 [2024-05-15 04:26:05.253744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.253959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.253993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.276 qpair failed and we were unable to recover it. 00:25:17.276 [2024-05-15 04:26:05.254184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.254408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.254441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.276 qpair failed and we were unable to recover it. 00:25:17.276 [2024-05-15 04:26:05.254674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.254921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.254963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.276 qpair failed and we were unable to recover it. 00:25:17.276 [2024-05-15 04:26:05.255150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.255389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.276 [2024-05-15 04:26:05.255434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.276 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.255638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.255851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.255884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.256123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.256347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.256381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.256601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.256785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.256818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 
00:25:17.548 [2024-05-15 04:26:05.257038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.257223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.257256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.257448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.257634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.257668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.257859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.258046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.258081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.258314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.258554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.258588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.258816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.259034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.259067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.259259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.259493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.259525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.259710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.259960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.260004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 
00:25:17.548 [2024-05-15 04:26:05.260218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.260520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.260570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.260793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.261284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.261738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.261987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.262163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.262365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.262396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.262597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.262813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.262846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.263064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.263286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.263319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 
00:25:17.548 [2024-05-15 04:26:05.263542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.263770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.263804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.548 [2024-05-15 04:26:05.264009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.264217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.548 [2024-05-15 04:26:05.264248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.548 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.264458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.264685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.264717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.264954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.265183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.265229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.265477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.265702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.265736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.265984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.266182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.266217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.266461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.266743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.266775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 
00:25:17.549 [2024-05-15 04:26:05.267027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.267252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.267287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.267517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.267752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.267787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.268012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.268204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.268245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.268453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.268719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.268756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.268984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.269229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.269275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.269457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.269677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.269711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.269940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.270181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.270215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 
00:25:17.549 [2024-05-15 04:26:05.270539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.270816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.270849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.271064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.271322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.271354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.271600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.271796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.271830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.272068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.272301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.272333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.272578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.272798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.272844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.273119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.273357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.273391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.273599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.273841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.273874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 
00:25:17.549 [2024-05-15 04:26:05.274124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.274377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.274410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.274690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.274937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.274971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.275208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.275428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.275474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.275704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.275911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.275969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.276186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.276513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.276546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.276772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.276957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.276992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.277216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.277429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.277475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 
00:25:17.549 [2024-05-15 04:26:05.277842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.278115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.278149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.278433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.278735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.278772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.279070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.279287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.279336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.549 qpair failed and we were unable to recover it. 00:25:17.549 [2024-05-15 04:26:05.279560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.279801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.549 [2024-05-15 04:26:05.279838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.280056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.280317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.280352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.280580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.280794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.280828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.281042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.281267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.281299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 
00:25:17.550 [2024-05-15 04:26:05.281535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.281741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.281772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.281991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.282233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.282284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.282549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.282772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.282806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.283033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.283246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.283279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.283524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.283740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.283773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.284013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.284231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.284262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.284468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.284676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.284715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 
00:25:17.550 [2024-05-15 04:26:05.284970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.285212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.285262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.285499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.285754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.285803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.286055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.286294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.286327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.286581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.286798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.286830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.287098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.287338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.287371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.287698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.287923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.287965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.288172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.288395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.288427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 
00:25:17.550 [2024-05-15 04:26:05.288674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.289124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.289157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.289406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.289620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.289655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.289919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.290169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.290208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.290452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.290690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.290738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.290951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.291175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.291223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.291422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.291681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.291713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.291938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.292165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.292200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 
00:25:17.550 [2024-05-15 04:26:05.292448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.292680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.292712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.292951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.293173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.293207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.293433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.293843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.293876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.294080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.294317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.294355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.294614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.294817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.294856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.550 qpair failed and we were unable to recover it. 00:25:17.550 [2024-05-15 04:26:05.295089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.550 [2024-05-15 04:26:05.295339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.295379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.295621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.295871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.295904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 
00:25:17.551 [2024-05-15 04:26:05.296110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.296318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.296352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.296565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.296783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.296817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.297064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.297298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.297331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.297557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.297837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.297869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.298095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.298333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.298365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.298593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.298802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.298834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.299041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.299278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.299326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 
00:25:17.551 [2024-05-15 04:26:05.299624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.299903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.299946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.300171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.300416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.300450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.300682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.300957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.301000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.301252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.301491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.301527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.301752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.302001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.302035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.302275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.302478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.302510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.302799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.303026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.303060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 
00:25:17.551 [2024-05-15 04:26:05.303284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.303496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.303529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.303771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.304007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.304057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.304340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.304584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.304616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.304864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.305082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.305116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.305341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.305584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.305631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.305832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.306072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.306113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.306344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.306602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.306638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 
00:25:17.551 [2024-05-15 04:26:05.306912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.307127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.307161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.307585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.307884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.307937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.308228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.308454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.308501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.308773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.308988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.309020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.309236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.309486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.309519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.309777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.309961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.309995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 00:25:17.551 [2024-05-15 04:26:05.310191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.310392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.551 [2024-05-15 04:26:05.310425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.551 qpair failed and we were unable to recover it. 
00:25:17.552 [2024-05-15 04:26:05.310647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.310902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.310955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.311298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.311531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.311577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.311755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.311965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.311999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.312281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.312488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.312521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.312727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.313019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.313054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.313290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.313499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.313532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.313796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.314004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.314051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 
00:25:17.552 [2024-05-15 04:26:05.314292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.314507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.314540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.314764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.314986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.315020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.315264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.315508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.315542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.315738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.315952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.315991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.316214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.316494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.316526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.316759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.317284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 
00:25:17.552 [2024-05-15 04:26:05.317718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.317987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.318214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.318471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.318519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.318821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.319045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.319078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.319266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.319534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.319569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.319795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.320040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.320082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.320272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.320513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.320547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.320741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.321012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.321052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 
00:25:17.552 [2024-05-15 04:26:05.321412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.321687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.321725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.321982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.322228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.322280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.552 qpair failed and we were unable to recover it. 00:25:17.552 [2024-05-15 04:26:05.322548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.552 [2024-05-15 04:26:05.322761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.322795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.323054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.323313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.323345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.323570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.323780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.323815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.324039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.324253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.324302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.324541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.324716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.324747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 
00:25:17.553 [2024-05-15 04:26:05.325022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.325253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.325299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.325568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.325787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.325819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.326066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.326289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.326324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.326660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.326927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.326991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.327189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.327397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.327431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.327650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.327889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.327943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.328147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.328354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.328386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 
00:25:17.553 [2024-05-15 04:26:05.328631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.328851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.328884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.329155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.329366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.329401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.329668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.329910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.329966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.330190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.330389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.330421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.330658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.330834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.330866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.331149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.331393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.331439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.331639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.331890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.331949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 
00:25:17.553 [2024-05-15 04:26:05.332285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.332517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.332564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.332833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.333081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.333116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.333353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.333593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.333625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.333801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.334030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.334079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.334321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.334553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.334586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.334831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.335084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.335120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.335380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.335598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.335632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 
00:25:17.553 [2024-05-15 04:26:05.335872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.336179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.336222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.336462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.336709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.336754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.337014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.337202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.337229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.553 [2024-05-15 04:26:05.337405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.337584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.553 [2024-05-15 04:26:05.337608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.553 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.337832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.338261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.338680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.338917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 
00:25:17.554 [2024-05-15 04:26:05.339131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.339302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.339327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.339497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.339717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.339742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.339967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.340176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.340201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.340399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.340598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.340623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.340827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.341265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.341729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.341979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 
00:25:17.554 [2024-05-15 04:26:05.342202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.342424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.342449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.342646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.342811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.342835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.343031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.343259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.343284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.343483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.343673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.343697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.343895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.344080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.344106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.344334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.344555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.344580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.344780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 
00:25:17.554 [2024-05-15 04:26:05.345283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.345677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.345899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.346088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.346360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.346387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.346631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.346864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.346889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.347064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.347249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.347274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.347585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.347821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.347846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.348084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.348292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.348317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 
00:25:17.554 [2024-05-15 04:26:05.348520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.348700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.348723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.348936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.349317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.349748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.349961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.350165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.350356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.350382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.554 qpair failed and we were unable to recover it. 00:25:17.554 [2024-05-15 04:26:05.350627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.554 [2024-05-15 04:26:05.350822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.350849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.351054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.351287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.351312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 
00:25:17.555 [2024-05-15 04:26:05.351476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.351647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.351672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.351869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.352060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.352085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.352303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.352493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.352518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.352833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.353232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.353628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.353892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.354095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.354293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.354317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 
00:25:17.555 [2024-05-15 04:26:05.354507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.354767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.354792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.355008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.355203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.355228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.355425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.355730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.355769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.355966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.356354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.356742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.356962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.357165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.357427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.357451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 
00:25:17.555 [2024-05-15 04:26:05.357675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.357873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.357900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.358117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.358312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.358337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.358579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.358775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.358800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.358985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.359176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.359207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.359411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.359585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.359610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.359833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.360272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 
00:25:17.555 [2024-05-15 04:26:05.360674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.360872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.361072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.361275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.361300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.361532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.361755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.361780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.361955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.362137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.362162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.362470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.362644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.362668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.362863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.363066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.363092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 00:25:17.555 [2024-05-15 04:26:05.363258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.363461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.555 [2024-05-15 04:26:05.363490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.555 qpair failed and we were unable to recover it. 
00:25:17.555 [2024-05-15 04:26:05.363688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.363858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.363899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.364149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.364311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.364336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.364557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.364751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.364777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.365034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.365201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.365227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.365453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.365649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.365674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.365873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.366256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 
00:25:17.556 [2024-05-15 04:26:05.366674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.366898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.367128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.367293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.367318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.367485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.367709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.367739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.367908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.368115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.368140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.368362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.368556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.368580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.368780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.368975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.369001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.369179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.369374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.369400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 
00:25:17.556 [2024-05-15 04:26:05.369598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.369795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.369820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.370013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.370185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.370223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.370407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.370599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.370624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.370823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.370994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.371020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.371186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.371379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.371405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.371586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.371834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.371863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.372151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.372371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.372396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 
00:25:17.556 [2024-05-15 04:26:05.372560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.372757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.372783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.373043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.373224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.373250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.373441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.373672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.373698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.373954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.374158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.374184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.374417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.374640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.374664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.374849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.375278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 
00:25:17.556 [2024-05-15 04:26:05.375665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.375888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.556 [2024-05-15 04:26:05.376157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.376373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.556 [2024-05-15 04:26:05.376397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.556 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.376608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.376774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.376799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.377006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.377205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.377230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.377402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.377600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.377627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.377812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.378018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.378043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.378270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.378507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.378530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 
00:25:17.557 [2024-05-15 04:26:05.378799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.378993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.379018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.379185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.379397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.379421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.379628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.379845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.379869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.380044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.380246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.380273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.380496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.380666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.380691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.380890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.381320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 
00:25:17.557 [2024-05-15 04:26:05.381723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.381996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.382176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.382360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.382385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.382614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.382786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.382811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.383006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.383198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.383223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.383420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.383585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.383626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.383830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.384226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 
00:25:17.557 [2024-05-15 04:26:05.384623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.384845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.385049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.385225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.385250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.385479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.385650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.385674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.385864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.386062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.386088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.386299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.386562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.386587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.557 [2024-05-15 04:26:05.386810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.386988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.557 [2024-05-15 04:26:05.387015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.557 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.387224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.387437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.387461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 
00:25:17.558 [2024-05-15 04:26:05.387662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.387887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.387912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.388123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.388319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.388344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.388511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.388681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.388706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.388975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.389170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.389195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.389391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.389583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.389608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.389852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.390276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 
00:25:17.558 [2024-05-15 04:26:05.390718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.390908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.391094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.391324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.391349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.391548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.391810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.391836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.392018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.392251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.392277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.392478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.392711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.392736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.392962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.393139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.393164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.393381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.393604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.393629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 
00:25:17.558 [2024-05-15 04:26:05.393833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.394275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.394740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.394964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.395146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.395410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.395435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.395677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.395889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.395914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.396135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.396328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.396353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.396573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.396766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.396791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 
00:25:17.558 [2024-05-15 04:26:05.396992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.397189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.397214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.397450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.397669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.397694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.397857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.398269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.398718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.398946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.399121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.399342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.399367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 00:25:17.558 [2024-05-15 04:26:05.399556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.399869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.399893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.558 qpair failed and we were unable to recover it. 
00:25:17.558 [2024-05-15 04:26:05.400093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.558 [2024-05-15 04:26:05.400372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.400397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.400593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.400792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.400817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.401018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.401246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.401272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.401462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.401679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.401704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.401902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.402096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.402122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.402324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.402516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.402541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.402767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.402989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.403018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 
00:25:17.559 [2024-05-15 04:26:05.403224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.403439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.403462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.403630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.403862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.403885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.404095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.404259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.404297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.404510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.404705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.404728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.404919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.405353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.405719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.405942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 
00:25:17.559 [2024-05-15 04:26:05.406150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.406350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.406375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.406544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.406740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.406765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.406962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.407133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.407158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.407394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.407602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.407628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.407797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.407992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.408018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.408188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.408390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.408415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.408613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.408780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.408807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 
00:25:17.559 [2024-05-15 04:26:05.408995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.409217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.409242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.409444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.409642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.409667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.409889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.410324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.410768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.410994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.411171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.411366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.411401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.411610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.411804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.411828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 
00:25:17.559 [2024-05-15 04:26:05.412033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.412259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.412284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.412484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.412673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.559 [2024-05-15 04:26:05.412698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.559 qpair failed and we were unable to recover it. 00:25:17.559 [2024-05-15 04:26:05.412893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.413291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.413695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.413947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.414171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.414339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.414364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.414561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.414763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.414788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 
00:25:17.560 [2024-05-15 04:26:05.415025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.415226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.415265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.415480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.415706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.415735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.415904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.416311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.416742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.416980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.417179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.417378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.417404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 00:25:17.560 [2024-05-15 04:26:05.417642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.417810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.560 [2024-05-15 04:26:05.417834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.560 qpair failed and we were unable to recover it. 
00:25:17.560 [2024-05-15 04:26:05.418007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.560 [2024-05-15 04:26:05.418203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.560 [2024-05-15 04:26:05.418228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:17.560 qpair failed and we were unable to recover it.
[the same error group — two posix_sock_create connect() failures (errno = 111) followed by an nvme_tcp_qpair_connect_sock connection error for tqpair=0x1b70420 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 04:26:05.418 through 04:26:05.483]
00:25:17.565 [2024-05-15 04:26:05.482820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.565 [2024-05-15 04:26:05.482995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.565 [2024-05-15 04:26:05.483021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:17.565 qpair failed and we were unable to recover it.
00:25:17.565 [2024-05-15 04:26:05.483209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.483403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.483427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.483636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.483836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.483862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.484072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.484243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.484269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.484437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.484657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.484681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.484882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.485307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.485698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.485926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 
00:25:17.565 [2024-05-15 04:26:05.486115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.486317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.486342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.565 [2024-05-15 04:26:05.486538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.486706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.565 [2024-05-15 04:26:05.486733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.565 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.486938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.487350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.487760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.487986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.488181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.488399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.488424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.488618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.488780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.488805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 
00:25:17.566 [2024-05-15 04:26:05.489007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.489239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.489263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.489460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.489624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.489664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.489873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.490131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.490156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.490353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.490577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.490602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.490799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.491233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.491646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.491865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 
00:25:17.566 [2024-05-15 04:26:05.492058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.492258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.492283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.492485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.492676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.492701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.492894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.493329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.493697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.493917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.494091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.494261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.494285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.494449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.494673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.494698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 
00:25:17.566 [2024-05-15 04:26:05.494895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.495319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.495736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.495938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.496115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.496317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.496341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.496564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.496763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.496787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.566 [2024-05-15 04:26:05.496956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.497155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.566 [2024-05-15 04:26:05.497180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.566 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.497356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.497534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.497559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 
00:25:17.567 [2024-05-15 04:26:05.497760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.497960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.497985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.498184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.498388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.498412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.498586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.498777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.498801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.498993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.499162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.499187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.499405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.499605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.499630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.499800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.500222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 
00:25:17.567 [2024-05-15 04:26:05.500587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.500807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.501012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.501210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.501235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.501427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.501624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.501654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.501877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.502240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.502265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.502457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.502660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.502686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.502897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.503317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 
00:25:17.567 [2024-05-15 04:26:05.503716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.503940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.504144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.504337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.504362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.504575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.504785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.504809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.505041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.505207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.505232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.505431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.505627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.505651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.505818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.506257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 
00:25:17.567 [2024-05-15 04:26:05.506642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.506855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.507044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.507215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.507240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.507420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.507613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.507637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.507835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.507996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.508021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.508215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.508434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.508459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.508669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.508839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.508865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 00:25:17.567 [2024-05-15 04:26:05.509062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.509256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.509281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.567 qpair failed and we were unable to recover it. 
00:25:17.567 [2024-05-15 04:26:05.509515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.567 [2024-05-15 04:26:05.509708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.509733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.509940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.510348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.510748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.510971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.511144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.511371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.511396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.511595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.511768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.511792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.511966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 
00:25:17.568 [2024-05-15 04:26:05.512367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.512780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.512980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.513178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.513351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.513377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.513586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.513759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.513784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.513986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.514185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.514212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.514416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.514615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.514639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.514817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.514983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.515008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 
00:25:17.568 [2024-05-15 04:26:05.515209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.515380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.515405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.515603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.515805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.515831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.516031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.516205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.516229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.516424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.516613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.516638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.516863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.517249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.517651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.517905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 
00:25:17.568 [2024-05-15 04:26:05.518134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.518304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.518329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.518558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.518760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.518785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.518950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.519370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.519758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.519980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.520177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.520337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.520362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.520558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.520728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.520753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 
00:25:17.568 [2024-05-15 04:26:05.520972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.521173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.521198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.521419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.521640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.521664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.568 qpair failed and we were unable to recover it. 00:25:17.568 [2024-05-15 04:26:05.521833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.568 [2024-05-15 04:26:05.522034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.522059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.569 qpair failed and we were unable to recover it. 00:25:17.569 [2024-05-15 04:26:05.522222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.522393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.522417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.569 qpair failed and we were unable to recover it. 00:25:17.569 [2024-05-15 04:26:05.522605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.522805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.522832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.569 qpair failed and we were unable to recover it. 00:25:17.569 [2024-05-15 04:26:05.523030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.523227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.523252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.569 qpair failed and we were unable to recover it. 00:25:17.569 [2024-05-15 04:26:05.523481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.523701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.569 [2024-05-15 04:26:05.523726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.569 qpair failed and we were unable to recover it. 
00:25:17.569 [2024-05-15 04:26:05.523949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.569 [2024-05-15 04:26:05.524161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.569 [2024-05-15 04:26:05.524185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:17.569 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x1b70420 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 04:26:05.524 through 04:26:05.587 ...]
00:25:17.844 [2024-05-15 04:26:05.587081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.844 [2024-05-15 04:26:05.587265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.844 [2024-05-15 04:26:05.587290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:17.844 qpair failed and we were unable to recover it.
00:25:17.844 [2024-05-15 04:26:05.587490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.587659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.587683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.587860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.588236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.588641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.588844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.589027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.589226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.589251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.589423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.589593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.589618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.589783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.589985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.590013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 
00:25:17.844 [2024-05-15 04:26:05.590182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.590383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.590408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.590604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.590794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.590819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.590990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.591157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.591181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.591352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.591574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.591598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.591770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.591991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.592017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.592181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.592376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.592400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.592620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.592816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.592841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 
00:25:17.844 [2024-05-15 04:26:05.593052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.593224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.593251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.593470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.593641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.593666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.593835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.594222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.594631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.594829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.595000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.595224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.595251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.595451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.595630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.595655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 
00:25:17.844 [2024-05-15 04:26:05.595819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.596252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.596667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.596861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.597033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.597232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.597259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.597434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.597599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.597623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.597827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.598284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 
00:25:17.844 [2024-05-15 04:26:05.598698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.598953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.599181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.599376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.599401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.844 qpair failed and we were unable to recover it. 00:25:17.844 [2024-05-15 04:26:05.599598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.844 [2024-05-15 04:26:05.599799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.599824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.600020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.600214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.600239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.600411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.600599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.600624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.600852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.601216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 
00:25:17.845 [2024-05-15 04:26:05.601591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.601803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.602005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.602229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.602254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.602485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.602660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.602687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.602880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.603271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.603690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.603917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.604095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.604270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.604294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 
00:25:17.845 [2024-05-15 04:26:05.604504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.604676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.604700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.604874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.605277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.605704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.605896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.606069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.606256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.606281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.606476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.606672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.606697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.606918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 
00:25:17.845 [2024-05-15 04:26:05.607316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.607704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.607920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.608117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.608315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.608340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.608540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.608711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.608737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.608945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.609166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.609191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.609419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.609611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.609636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.609860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 
00:25:17.845 [2024-05-15 04:26:05.610275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.610746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.610980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.845 qpair failed and we were unable to recover it. 00:25:17.845 [2024-05-15 04:26:05.611152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.845 [2024-05-15 04:26:05.611317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.611342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.611515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.611708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.611732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.611927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.612347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.612742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.612987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 
00:25:17.846 [2024-05-15 04:26:05.613165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.613383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.613407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.613597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.613760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.613785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.613984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.614184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.614209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.614398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.614573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.614598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.614818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.615236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.615673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.615917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 
00:25:17.846 [2024-05-15 04:26:05.616116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.616334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.616359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.616583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.616813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.616838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.617027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.617241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.617266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.617459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.617654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.617678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.617875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.618275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.618679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.618899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 
00:25:17.846 [2024-05-15 04:26:05.619137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.619335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.619359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.619535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.619696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.619721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.619888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.620296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.620723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.620916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.621086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.621272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.621297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.621496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.621697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.621721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 
00:25:17.846 [2024-05-15 04:26:05.621919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.622347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.622778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.622983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.623161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.623354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.623379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.623568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.623801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.623825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.624008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.624199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.624224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.624420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.624651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.624675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 
00:25:17.846 [2024-05-15 04:26:05.624900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.625079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.625104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.625333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.625556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.846 [2024-05-15 04:26:05.625581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.846 qpair failed and we were unable to recover it. 00:25:17.846 [2024-05-15 04:26:05.625751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.625955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.625983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.626184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.626404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.626429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.626628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.626787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.626811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.627013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.627210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.627235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.627435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.627662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.627687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 
00:25:17.847 [2024-05-15 04:26:05.627921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.628110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.628134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.628327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.628529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.628554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.628749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.628981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.629006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.629210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.629402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.629427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.629602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.629767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.629791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.629959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.630361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 
00:25:17.847 [2024-05-15 04:26:05.630744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.630928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.631140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.631339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.631364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.631538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.631711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.631737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.631908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.632361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.632761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.632986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.633208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.633376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.633399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 
00:25:17.847 [2024-05-15 04:26:05.633593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.633784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.633810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.634012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.634190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.634214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.634389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.634596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.634621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.634818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.634988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.635012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.635187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.635386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.635413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.635608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.635829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.635854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.636025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.636247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.636276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 
00:25:17.847 [2024-05-15 04:26:05.636476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.636676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.636700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.636894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.637265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.637654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.637856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.638044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.638244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.638269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.638441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.638603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.638628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.638830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.639025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.639050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 
00:25:17.847 [2024-05-15 04:26:05.639224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.639393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.639417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.847 qpair failed and we were unable to recover it. 00:25:17.847 [2024-05-15 04:26:05.639608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.847 [2024-05-15 04:26:05.639803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.639828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.639996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.640190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.640214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.640408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.640605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.640630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.640803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.640999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.641025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.641223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.641417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.641441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.641603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.641776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.641818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 
00:25:17.848 [2024-05-15 04:26:05.642030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.642432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.642794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.642992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.643191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.643356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.643381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.643555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.643748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.643773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.643971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.644164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.644189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.644416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.644635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.644659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 
00:25:17.848 [2024-05-15 04:26:05.644887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.645108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.645133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.645330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.645557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.645582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.645804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.646205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.646638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.646881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.647108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.647332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.647357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.647569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.647734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.647760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 
00:25:17.848 [2024-05-15 04:26:05.647925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.648154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.648179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.648375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.648599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.648623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.648813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.649242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.649660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.649907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.650115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.650313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.650337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.650496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.650701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.650725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 
00:25:17.848 [2024-05-15 04:26:05.650952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.651146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.651171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.651376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.651540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.651565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.651786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.652254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.652723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.652938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.653114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.653312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.653337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.653504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.653726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.653750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 
00:25:17.848 [2024-05-15 04:26:05.653980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.654179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.848 [2024-05-15 04:26:05.654204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.848 qpair failed and we were unable to recover it. 00:25:17.848 [2024-05-15 04:26:05.654429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.654646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.654823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.655236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.655663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.655860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.656024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.656240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.656265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.656496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.656677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.656701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 
00:25:17.849 [2024-05-15 04:26:05.656896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.657311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.657732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.657935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.658160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.658360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.658385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.658578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.658764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.658788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.658986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.659405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 
00:25:17.849 [2024-05-15 04:26:05.659774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.659964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.660136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.660337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.660362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.660556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.660731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.660757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.660979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.661150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.661175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.661371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.661611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.661638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.661865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.662285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 
00:25:17.849 [2024-05-15 04:26:05.662732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.662947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.663124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.663325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.663350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.663521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.663789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.663813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.663991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.664182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.664209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.664404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.664615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.849 [2024-05-15 04:26:05.664639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.849 qpair failed and we were unable to recover it. 00:25:17.849 [2024-05-15 04:26:05.664852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.665327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 
00:25:17.850 [2024-05-15 04:26:05.665747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.665970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.666153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.666350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.666375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.666577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.666775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.666799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.667005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.667198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.667224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.667446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.667614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.667639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.667836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.668030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.668056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.668260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.668481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.668505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 
00:25:17.850 [2024-05-15 04:26:05.668701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.668987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.669012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.669211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.669403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.669428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.669601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.669801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.669825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.669997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.670166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.670191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.670391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.670580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.670605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.670805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.671269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 
00:25:17.850 [2024-05-15 04:26:05.671710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.671921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.672108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.672331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.672357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.672558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.672781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.672807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.673032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.673231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.673256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.673450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.673666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.673691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.673887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.674263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 
00:25:17.850 [2024-05-15 04:26:05.674712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.674913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.675120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.675283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.675307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.675504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.675698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.675723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.675888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.676089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.850 [2024-05-15 04:26:05.676114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.850 qpair failed and we were unable to recover it. 00:25:17.850 [2024-05-15 04:26:05.676288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.676484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.676509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.676707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.676939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.676965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.677137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.677308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.677333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 
00:25:17.851 [2024-05-15 04:26:05.677530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.677705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.677746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.677960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.678183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.678210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.678417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.678596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.678621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.678794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.679281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.679699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.679915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.680118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.680281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.680305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 
00:25:17.851 [2024-05-15 04:26:05.680494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.680692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.680717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.680915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.681384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.681732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.681959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.682123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.682337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.682362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.682584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.682790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.682815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.683011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 
00:25:17.851 [2024-05-15 04:26:05.683395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.683775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.683998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.684170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.684397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.684422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.684614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.684808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.684833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.685031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.685252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.685277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.685477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.685698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.685723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.685886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.686063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.686088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 
00:25:17.851 [2024-05-15 04:26:05.686298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.686516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.686540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.851 qpair failed and we were unable to recover it. 00:25:17.851 [2024-05-15 04:26:05.686708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.851 [2024-05-15 04:26:05.686906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.686945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.687143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.687343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.687368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.687562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.687751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.687776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.687977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.688209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.688233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.688460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.688659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.688684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.688879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 
00:25:17.852 [2024-05-15 04:26:05.689311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.689699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.689891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.690089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.690290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.690314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.690514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.690709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.690734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.690903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.691316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.691700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.691891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 
00:25:17.852 [2024-05-15 04:26:05.692095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.692298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.692322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.692540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.692700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.692725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.692903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.693355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.693724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.693916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.694097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.694264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.694288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.694515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.694686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.694712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 
00:25:17.852 [2024-05-15 04:26:05.694917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.695375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.695767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.695994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.696170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.696339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.696365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.696540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.696768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.696794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.696988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.697167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.697192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 00:25:17.852 [2024-05-15 04:26:05.697391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.697563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.852 [2024-05-15 04:26:05.697587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.852 qpair failed and we were unable to recover it. 
00:25:17.852 [2024-05-15 04:26:05.697796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.697995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.698021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.698194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.698420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.698445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.698619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.698819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.698844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.699050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.699248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.699272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.699443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.699666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.699690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.699859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.700287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 
00:25:17.853 [2024-05-15 04:26:05.700703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.700919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.701097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.701294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.701319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.701514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.701711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.701735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.701934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.702362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.702779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.702987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.703213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.703414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.703440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 
00:25:17.853 [2024-05-15 04:26:05.703641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.703841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.703865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.704064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.704285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.704314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.704543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.704744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.704770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.704976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.705375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.705777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.705974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.706178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.706351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.706375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 
00:25:17.853 [2024-05-15 04:26:05.706585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.706757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.706783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.706981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.707154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.707179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.707379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.707578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.707604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.707802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.707980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.708007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.708186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.708356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.708380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.708554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.708784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.853 [2024-05-15 04:26:05.708808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.853 qpair failed and we were unable to recover it. 00:25:17.853 [2024-05-15 04:26:05.709017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 
00:25:17.854 [2024-05-15 04:26:05.709413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.709768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.709995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.710200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.710365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.710390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.710591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.710785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.710809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.710978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.711342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.711732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.711925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 
00:25:17.854 [2024-05-15 04:26:05.712148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.712345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.712370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.712551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.712745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.712769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.712941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.713317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.713742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.713966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.714146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.714371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.714395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 00:25:17.854 [2024-05-15 04:26:05.714559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.714730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.714754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.854 qpair failed and we were unable to recover it. 
00:25:17.854 [2024-05-15 04:26:05.714952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.715180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.854 [2024-05-15 04:26:05.715205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.715405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.715617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.715642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.715865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.716230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.716673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.716899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.717102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.717295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.717323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.717522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.717741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.717768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 
00:25:17.855 [2024-05-15 04:26:05.718007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.718229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.718254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.718431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.718624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.718651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.718851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.719270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.719693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.719903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.720115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.720307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.720331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.720500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.720689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.720715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 
00:25:17.855 [2024-05-15 04:26:05.720916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.721351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.721772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.721988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.722180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.722375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.722400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.722595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.722789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.722814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.723025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.723227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.723252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.723443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.723644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.723669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 
00:25:17.855 [2024-05-15 04:26:05.723840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.724297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.724679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.724900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.855 qpair failed and we were unable to recover it. 00:25:17.855 [2024-05-15 04:26:05.725084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.855 [2024-05-15 04:26:05.725276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.725307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.725540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.725735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.725760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.725954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.726314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 
00:25:17.856 [2024-05-15 04:26:05.726698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.726924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.727102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.727262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.727286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.727485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.727711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.727736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.727938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.728149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.728174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.728352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.728578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.728603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.728832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.729256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 
00:25:17.856 [2024-05-15 04:26:05.729686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.729901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.730104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.730284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.730309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.730480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.730703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.730727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.730906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.731312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.731744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.731992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.732194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.732367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.732392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 
00:25:17.856 [2024-05-15 04:26:05.732590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.732787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.732813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.733011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.733232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.733258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.733457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.733676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.733702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.733866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.734221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.734637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.734823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 00:25:17.856 [2024-05-15 04:26:05.735019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.735180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.735204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.856 qpair failed and we were unable to recover it. 
00:25:17.856 [2024-05-15 04:26:05.735392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.856 [2024-05-15 04:26:05.735589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.735614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.735816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.736230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.736663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.736918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.737123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.737292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.737317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.737540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.737732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.737757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.737977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.738189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.738213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 
00:25:17.857 [2024-05-15 04:26:05.738405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.738626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.738651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.738839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.739256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.739670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.739887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.740092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.740281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.740306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.740479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.740699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.740724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.740921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 
00:25:17.857 [2024-05-15 04:26:05.741339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.741755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.741952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.742128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.742330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.742355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.742525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.742697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.742722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.742894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.743292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.743666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.743885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 
00:25:17.857 [2024-05-15 04:26:05.744086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.744254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.744278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.744453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.744622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.744646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.744820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.745019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.745043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.745238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.745405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.745430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.857 qpair failed and we were unable to recover it. 00:25:17.857 [2024-05-15 04:26:05.745608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.857 [2024-05-15 04:26:05.745769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.745794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.745992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.746171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.746195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.746387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.746588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.746612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 
00:25:17.858 [2024-05-15 04:26:05.746804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.746975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.747000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.747174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.747364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.747390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.747557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.747753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.747779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.747976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.748386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.748776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.748971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.749151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.749358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.749382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 
00:25:17.858 [2024-05-15 04:26:05.749602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.749793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.749817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.749994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.750190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.750218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.750397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.750619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.750644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.750835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.751246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.751648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.751845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.752037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.752215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.752239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 
00:25:17.858 [2024-05-15 04:26:05.752406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.752627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.752651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.752849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.753300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.753692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.753915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.754086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.754262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.754287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.754487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.754682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.754707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.754871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 
00:25:17.858 [2024-05-15 04:26:05.755244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.755707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.755911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.756115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.756291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.756316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.858 qpair failed and we were unable to recover it. 00:25:17.858 [2024-05-15 04:26:05.756485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.756679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.858 [2024-05-15 04:26:05.756703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.756907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.757110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.757135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.757303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.757489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.757513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.757778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 
00:25:17.859 [2024-05-15 04:26:05.758284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.758717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.758940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.759137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.759322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.759347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.759515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.759714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.759739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.759913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.760354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.760733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.760963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 
00:25:17.859 [2024-05-15 04:26:05.761138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.761339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.761363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.761538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.761737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.761762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.761966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.762166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.762191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.762393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.762587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.762612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.762782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.762978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.763003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.763174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.763416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.763441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.763649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.763850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.763875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 
00:25:17.859 [2024-05-15 04:26:05.764052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.764246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.764271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.764442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.764641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.764666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.859 [2024-05-15 04:26:05.764859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.765079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.859 [2024-05-15 04:26:05.765104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.859 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.765302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.765500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.765524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.765725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.765921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.765952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.766117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.766281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.766305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.766524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.766711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.766735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 
00:25:17.860 [2024-05-15 04:26:05.766900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.767337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.767758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.767963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.768163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.768387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.768411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.768632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.768851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.768876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.769088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.769312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.769336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.769533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.769727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.769753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 
00:25:17.860 [2024-05-15 04:26:05.769942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.770288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.770725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.770965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.771139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.771332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.771362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.771585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.771810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.771834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.772024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.772244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.772269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.772466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.772657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.772683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 
00:25:17.860 [2024-05-15 04:26:05.772882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.773309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.773734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.773960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.774165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.774359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.774385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.774581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.774804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.774829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.775055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.775255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.775286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.775457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.775688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.775713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 
00:25:17.860 [2024-05-15 04:26:05.776008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.776231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.776256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.776423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.776646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.776670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.860 qpair failed and we were unable to recover it. 00:25:17.860 [2024-05-15 04:26:05.776894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.860 [2024-05-15 04:26:05.777066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.777091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.777290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.777507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.777707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.777906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.777938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.778112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.778311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.778335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.778507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.778707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.778731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 
00:25:17.861 [2024-05-15 04:26:05.778940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.779320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.779687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.779940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.780138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.780333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.780359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.780532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.780705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.780732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.780901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.781294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 
00:25:17.861 [2024-05-15 04:26:05.781657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.781851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.782029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.782209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.782234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.782427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.782615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.782642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.782809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.783230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.783660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.783908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.784079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.784274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.784299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 
00:25:17.861 [2024-05-15 04:26:05.784464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.784661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.784688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.784887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.785310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.785744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.785965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.786162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.786359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.861 [2024-05-15 04:26:05.786385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.861 qpair failed and we were unable to recover it. 00:25:17.861 [2024-05-15 04:26:05.786606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.786829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.786855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.787076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.787240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.787264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 
00:25:17.862 [2024-05-15 04:26:05.787432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.787628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.787653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.787827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.788272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.788638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.788864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.789063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.789256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.789281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.789476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.789689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.789713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.789884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 
00:25:17.862 [2024-05-15 04:26:05.790290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.790726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.790917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.791139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.791362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.791386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.791584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.791783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.791807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.791983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.792195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.792221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.792441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.792608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.792639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.792812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 
00:25:17.862 [2024-05-15 04:26:05.793241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.793617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.793881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.794056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.794255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.794280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.794453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.794624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.794648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.794839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.795229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 00:25:17.862 [2024-05-15 04:26:05.795632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.862 [2024-05-15 04:26:05.795862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.862 qpair failed and we were unable to recover it. 
00:25:17.862 [2024-05-15 04:26:05.796085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.796284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.796309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.796480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.796648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.796673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.796872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.797247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.797630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.797827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.798028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.798414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 
00:25:17.863 [2024-05-15 04:26:05.798768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.798984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.799180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.799378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.799402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.799595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.799789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.799814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.800010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.800171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.800196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.800395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.800557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.800582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.801435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.801650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.801676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.801879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 
00:25:17.863 [2024-05-15 04:26:05.802284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.802645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.802877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.803096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.803290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.803314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.803478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.803673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.803697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.803916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.804149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.804174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.804374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.804590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.804613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.804820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 
00:25:17.863 [2024-05-15 04:26:05.805268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.805662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.805883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.806101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.806297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.806321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.806517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.806737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.806761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.806962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.807162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.863 [2024-05-15 04:26:05.807187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.863 qpair failed and we were unable to recover it. 00:25:17.863 [2024-05-15 04:26:05.807381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.807580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.807606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.807803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 
00:25:17.864 [2024-05-15 04:26:05.808210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.808680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.808892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.809114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.809318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.809342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.809502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.809701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.809726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.809905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.810352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.810741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.810967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 
00:25:17.864 [2024-05-15 04:26:05.811149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.811358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.811385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.811582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.811753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.811778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.811976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.812144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.812168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.812389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.812574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.812598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.812826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.813226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.813675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.813872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 
00:25:17.864 [2024-05-15 04:26:05.814046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.814216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.814259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.814470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.814659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.814684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.814903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.815157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.815183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.815355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.815574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.815600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.815799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.815996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.816022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.816220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.816439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.816464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 00:25:17.864 [2024-05-15 04:26:05.816652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.816816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.816842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.864 qpair failed and we were unable to recover it. 
00:25:17.864 [2024-05-15 04:26:05.817041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.817210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.864 [2024-05-15 04:26:05.817235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.817436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.817595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.817634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.817850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.818257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.818647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.818902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.819080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.819285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.819310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.819510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.819680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.819706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 
00:25:17.865 [2024-05-15 04:26:05.819906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.820122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.820147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.820316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.820510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.820535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.820761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.821597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.821626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.821831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.822220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.822613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.822809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.823038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.823203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.823229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 
00:25:17.865 [2024-05-15 04:26:05.823431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.823658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.823684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.823877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.824268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.824702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.824954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.825194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.825391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.825416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.825637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.825828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.825853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 00:25:17.865 [2024-05-15 04:26:05.826060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.826236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.865 [2024-05-15 04:26:05.826260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.865 qpair failed and we were unable to recover it. 
00:25:17.866 [2024-05-15 04:26:05.826442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.826638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.826663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.826862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.827284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.827698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.827952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.828130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.828344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.828370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.828539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.828738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.828764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.828960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.829199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.829225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 
00:25:17.866 [2024-05-15 04:26:05.829439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.829634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.829658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.829825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.829995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.830021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.830205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.830430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.830455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.830650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.830843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.830868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.831070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.831245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.831270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.831465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.831656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.831680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.831848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 
00:25:17.866 [2024-05-15 04:26:05.832321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.832749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.832975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.833154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.833384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.833411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.833613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.833807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.833831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.834022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.834219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.834244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.834468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.834660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.834685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.834876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 
00:25:17.866 [2024-05-15 04:26:05.835324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.835705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.835904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.836106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.836290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.836315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.836503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.836704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.836728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.866 [2024-05-15 04:26:05.836897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.837099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.866 [2024-05-15 04:26:05.837123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.866 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.837310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.837558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.837583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.837790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 
00:25:17.867 [2024-05-15 04:26:05.838274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.838684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.838927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.839138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.839332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.839355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.839569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.839760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.839784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.839986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.840196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.840224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.840420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.840622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.840653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 00:25:17.867 [2024-05-15 04:26:05.840862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.841047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.867 [2024-05-15 04:26:05.841073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:17.867 qpair failed and we were unable to recover it. 
[log condensed: the identical three-line failure sequence repeats for every connection attempt from 2024-05-15 04:26:05.841278 through 04:26:05.903853 (Jenkins log time 00:25:17.867 to 00:25:18.153): posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:25:18.153 [2024-05-15 04:26:05.904127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.904421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.904446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.904642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.904877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.904902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.905098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.905265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.905290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.905468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.905657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.905682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.905913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.906328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.906717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.906904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 
00:25:18.153 [2024-05-15 04:26:05.907080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.907277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.907301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.907508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.907679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.907704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.907899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.908105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.908129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.908409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.908659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.908684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.908883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.909084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.909110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.153 qpair failed and we were unable to recover it. 00:25:18.153 [2024-05-15 04:26:05.909305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.153 [2024-05-15 04:26:05.909505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.909529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.909726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.909900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.909925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 
00:25:18.154 [2024-05-15 04:26:05.910094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.910287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.910315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.910511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.910708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.910733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.910911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.911107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.911132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.911334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.911531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.911555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.911832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.912266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.912685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.912908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 
00:25:18.154 [2024-05-15 04:26:05.913121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.913322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.913347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.913529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.913702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.913728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.913894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.914299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.914721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.914952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.915153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.915353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.915377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.915554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.915747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.915772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 
00:25:18.154 [2024-05-15 04:26:05.915999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.916199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.916223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.916451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.916619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.916643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.916842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.917197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.917576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.917797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.918005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.918203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.918227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.918421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.918616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.918640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 
00:25:18.154 [2024-05-15 04:26:05.918846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.919224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.919661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.919845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.920070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.920237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.920262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.920455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.920615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.920641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.920838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.921302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 
00:25:18.154 [2024-05-15 04:26:05.921734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.921978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.154 qpair failed and we were unable to recover it. 00:25:18.154 [2024-05-15 04:26:05.922151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.154 [2024-05-15 04:26:05.922373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.922398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.922619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.922814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.922839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.923036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.923257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.923281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.923479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.923665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.923689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.923914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.924330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 
00:25:18.155 [2024-05-15 04:26:05.924722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.924969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.925174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.925338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.925361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.925584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.925803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.925827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.926029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.926225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.926249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.926474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.926674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.926698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.926874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.927270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 
00:25:18.155 [2024-05-15 04:26:05.927751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.927976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.928184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.928378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.928404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.928599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.928802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.928826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.929030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.929259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.929284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.929451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.929683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.929707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.929909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.930315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 
00:25:18.155 [2024-05-15 04:26:05.930731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.930958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.931125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.931298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.931322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.931521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.931710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.931738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.931942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.932318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.932732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.932951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.933115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.933348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.933373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 
00:25:18.155 [2024-05-15 04:26:05.933542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.933732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.933756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.933989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.934190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.934215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.934418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.934592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.934616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.934837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.935004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.935028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.935239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.935409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.935433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.155 qpair failed and we were unable to recover it. 00:25:18.155 [2024-05-15 04:26:05.935628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.155 [2024-05-15 04:26:05.935828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.935852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.936133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.936328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.936353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 
00:25:18.156 [2024-05-15 04:26:05.936518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.936690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.936714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.936920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.937340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.937749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.937973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.938148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.938384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.938409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.938609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.938804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.938828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.938998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.939174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.939201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 
00:25:18.156 [2024-05-15 04:26:05.939398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3491988 Killed "${NVMF_APP[@]}" "$@" 00:25:18.156 [2024-05-15 04:26:05.939678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.939704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.939895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.940113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.940144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:18.156 [2024-05-15 04:26:05.940345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.156 [2024-05-15 04:26:05.940518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.940546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:18.156 [2024-05-15 04:26:05.940770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.156 [2024-05-15 04:26:05.940964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.940990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.941212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.941381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.941406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.941602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.941791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.941816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 
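For context on the repeated failures above: on Linux, errno 111 is ECONNREFUSED. At this point the test script has just killed the nvmf target application (the 'Killed "${NVMF_APP[@]}"' message from target_disconnect.sh line 44) and is restarting it through nvmfappstart, so every connect() the initiator issues toward 10.0.0.2 port 4420 is refused until the target is listening again, which is why nvme_tcp_qpair_connect_sock keeps logging the same qpair failure. The snippet below is a minimal stand-alone C sketch, not SPDK code, illustrating the same failure mode; the address and port are copied from the log, and the behaviour assumes the host is reachable but nothing is listening on that port (an unreachable host would instead time out or report a different errno).

    /*
     * Illustrative sketch only: connect() to a reachable host with no
     * listener on the port fails immediately with errno 111 (ECONNREFUSED),
     * the error repeated throughout the log while the nvmf target is down.
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP port used in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target side this reports errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Built with a plain cc invocation and run while no process is bound to the port, this should print "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create entries above; once the restarted nvmf_tgt is listening again, the same connect() succeeds and the retry loop in the test stops producing these lines.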
00:25:18.156 [2024-05-15 04:26:05.941980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.942172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.942197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.942359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.942552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.942576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.942774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.942979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.943004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.943194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.943388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.943413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.943602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.943797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.943821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.944023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.944190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.944216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.944439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3492536 00:25:18.156 [2024-05-15 04:26:05.944609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.944635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 
00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3492536 00:25:18.156 [2024-05-15 04:26:05.944827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3492536 ']' 00:25:18.156 [2024-05-15 04:26:05.945023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.945049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.156 [2024-05-15 04:26:05.945271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:18.156 [2024-05-15 04:26:05.945437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.156 [2024-05-15 04:26:05.945462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:18.156 [2024-05-15 04:26:05.945663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 04:26:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.156 [2024-05-15 04:26:05.945887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.945913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.946149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.946343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.946370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 
00:25:18.156 [2024-05-15 04:26:05.946541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.946732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.946758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.156 qpair failed and we were unable to recover it. 00:25:18.156 [2024-05-15 04:26:05.947042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.947238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.156 [2024-05-15 04:26:05.947263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.947462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.947625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.947651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.947811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.947988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.948013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.948206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.948375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.948400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.948598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.948761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.948786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.948984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.949214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.949240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 
00:25:18.157 [2024-05-15 04:26:05.949440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.949637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.949662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.949855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.950250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.950724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.950954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.951173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.951372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.951396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.951600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.951798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.951822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.952002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.952228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.952253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 
00:25:18.157 [2024-05-15 04:26:05.952481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.952649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.952674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.952870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.953275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.953652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.953872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.954041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.954246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.954271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.954443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.954640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.954667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.954836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 
00:25:18.157 [2024-05-15 04:26:05.955263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.955647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.955869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.956065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.956263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.956288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.956485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.956681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.956706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.956868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.957259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.957692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.957937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 
00:25:18.157 [2024-05-15 04:26:05.958134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.958326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.157 [2024-05-15 04:26:05.958352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.157 qpair failed and we were unable to recover it. 00:25:18.157 [2024-05-15 04:26:05.958542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.958725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.958750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.958980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.959173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.959202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.959406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.959623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.959647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.959838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.960040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.960066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.960259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.960479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.960504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.960778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 
00:25:18.158 [2024-05-15 04:26:05.961249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.961762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.961979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.962171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.962396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.962421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.962615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.962836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.962860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.963053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.963248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.963273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.963448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.963645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.963672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.963901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 
00:25:18.158 [2024-05-15 04:26:05.964320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.964733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.964982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.965181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.965381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.965406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.965606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.965772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.965799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.966013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.966215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.966241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.966529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.966792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.966814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.967059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.967253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.967277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 
00:25:18.158 [2024-05-15 04:26:05.967452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.967674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.967699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.967876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.968285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.968766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.968990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.969161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.969398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.969422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.969600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.969773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.969798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.969993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.970187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.970212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 
00:25:18.158 [2024-05-15 04:26:05.970416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.970601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.970625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.970926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.971101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.971129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.971362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.971593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.971618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.971806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.972233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.972718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.972913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 00:25:18.158 [2024-05-15 04:26:05.973119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.973319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.973344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.158 qpair failed and we were unable to recover it. 
00:25:18.158 [2024-05-15 04:26:05.973567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.973753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.158 [2024-05-15 04:26:05.973777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.974023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.974199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.974224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.974410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.974638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.974663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.974894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.975277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.975681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.975872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.976122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.976353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.976378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 
00:25:18.159 [2024-05-15 04:26:05.976598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.976792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.976817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.977010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.977215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.977245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.977445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.977668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.977694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.977882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.978093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.978118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.978290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.978516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.978540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.978828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.979314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 
00:25:18.159 [2024-05-15 04:26:05.979681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.979907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.980113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.980318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.980343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.980545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.980774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.980799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.981002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.981169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.981193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.981392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.981613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.981641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.981874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.982239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 
00:25:18.159 [2024-05-15 04:26:05.982657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.982851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.983050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.983222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.983248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.983413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.983613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.983640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.983831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.984233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.984651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.984873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.985092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.985268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.985293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 
00:25:18.159 [2024-05-15 04:26:05.985478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.985678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.985702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.985905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.986348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.986761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.986963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.987162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.987424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.987448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.987615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.987777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.987802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 00:25:18.159 [2024-05-15 04:26:05.988030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.159 [2024-05-15 04:26:05.988267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.159 qpair failed and we were unable to recover it. 
00:25:18.159 [2024-05-15 04:26:05.988467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.988634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.988659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.988884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.989321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.989707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.989942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.990153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.990335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.990360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.990555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.990761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.990786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.990974] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:18.160 [2024-05-15 04:26:05.990996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.991050] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.160 [2024-05-15 04:26:05.991199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.991228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 
00:25:18.160 [2024-05-15 04:26:05.991468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.991662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.991687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.991856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.992242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.992681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.992882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.993106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.993329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.993354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.993542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.993738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.993762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.993965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.994159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.994184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 
00:25:18.160 [2024-05-15 04:26:05.994357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.994565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.994591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.994787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.994981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.995007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.995207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.995401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.995427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.995682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.995906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.995938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.996140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.996364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.996390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.996553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.996742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.996767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 00:25:18.160 [2024-05-15 04:26:05.996990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.997164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.160 [2024-05-15 04:26:05.997188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.160 qpair failed and we were unable to recover it. 
00:25:18.160 [2024-05-15 04:26:05.997421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.160 [2024-05-15 04:26:05.997614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.160 [2024-05-15 04:26:05.997639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:18.160 qpair failed and we were unable to recover it.
[The same sequence (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x1b70420 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 04:26:05.997 through 04:26:06.032.]
00:25:18.163 EAL: No free 2048 kB hugepages reported on node 1
[The connect()/qpair error sequence continues to repeat for every attempt from 04:26:06.032 through 04:26:06.060; each attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:25:18.165 [2024-05-15 04:26:06.061132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.061302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.061327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.061502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.061733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.061758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.061955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.062368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.062780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.062973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.063176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.063374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.063399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.063573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.063744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.063768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 
00:25:18.165 [2024-05-15 04:26:06.063974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.064171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.064197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.064371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.064592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.064616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.064815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.064979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.065004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.065170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.065330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.065355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.065549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.065725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.065749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.065950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.066358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 
00:25:18.165 [2024-05-15 04:26:06.066734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.066935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.067116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.067270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.067295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.067463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.067663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.067688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.067856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.068294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.068658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.068879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.069050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.069222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.069246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 
00:25:18.165 [2024-05-15 04:26:06.069440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.069612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.069641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.069848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.070310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.070718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.070939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.071165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.071328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.071352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.071516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.071701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.071726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.071949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 
00:25:18.165 [2024-05-15 04:26:06.072328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.165 [2024-05-15 04:26:06.072489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.072760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.072953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.073120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.073311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.073336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.165 qpair failed and we were unable to recover it. 00:25:18.165 [2024-05-15 04:26:06.073499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.165 [2024-05-15 04:26:06.073689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.073717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.073921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.074307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.074720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.074946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 
00:25:18.166 [2024-05-15 04:26:06.075173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.075361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.075386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.075587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.075779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.075803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.076003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.076387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.076773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.076998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.077169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.077369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.077396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.077588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.077786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.077815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 
00:25:18.166 [2024-05-15 04:26:06.077996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.078229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.078254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.078567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.078801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.078827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.079009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.079185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.079210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.079375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.079571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.079596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.079791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.079992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.080019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.080216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.080409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.080433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.080632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.080807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.080834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 
00:25:18.166 [2024-05-15 04:26:06.081031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.081224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.081249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.081442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.081717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.081741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.082004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.082177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.082202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.082432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.082629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.082653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.082820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.083223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.083637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.083855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 
00:25:18.166 [2024-05-15 04:26:06.084032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.084228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.084253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.084553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.084829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.084854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.085082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.085280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.085305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.085477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.085639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.085664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.085832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.086259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.086681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.086899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 
00:25:18.166 [2024-05-15 04:26:06.087098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.087267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.087291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.087495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.087690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.087714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.087918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.088149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.088175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.088404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.088567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.088591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.088783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.088983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.089008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.166 qpair failed and we were unable to recover it. 00:25:18.166 [2024-05-15 04:26:06.089174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.166 [2024-05-15 04:26:06.089388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.089413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.089647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.089821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.089845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.090042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.090211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.090238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.090404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.090605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.090630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.090822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.091230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.091669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.091868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.092046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.092248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.092272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.092464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.092636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.092661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.092824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.093213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.093605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.093826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.094003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.094403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.094800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.094994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.095193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.095394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.095419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.095575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.095764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.095789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.095956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.096157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.096183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.096424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.096620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.096644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.096841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.097284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.097679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.097895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.098070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.098250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.098274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.098467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.098691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.098716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.098916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.099303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.099742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.099938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.100133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.100327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.100352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.100546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.100710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.100735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.100908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.101309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.101784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.101997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.102196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.102390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.102414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.102583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.102811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.102836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.103063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.103236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.103261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.103467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.103639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.103663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.103858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 
00:25:18.167 [2024-05-15 04:26:06.104344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.104763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.104990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.167 qpair failed and we were unable to recover it. 00:25:18.167 [2024-05-15 04:26:06.105181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.167 [2024-05-15 04:26:06.105387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.105411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.105609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.105810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.105834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.106000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.106177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.106202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.106372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.106571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.106596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.106820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.106989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.107014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 
00:25:18.168 [2024-05-15 04:26:06.107215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.107415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.107440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.107612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.107781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.107807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.107999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.108192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.108217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.108391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.108587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.108613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.108786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.108984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.109010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.109206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.109432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.109457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.109650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.109822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.109848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 
00:25:18.168 [2024-05-15 04:26:06.110023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.110199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.110224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.110393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.110561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.110586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.110783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.111231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.111659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.111853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.112024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.112194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.112218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.112445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.112638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.112662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 
00:25:18.168 [2024-05-15 04:26:06.112861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.113278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.113695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.113915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.114153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.114321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.114345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.114578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.114796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.114821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.114988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.115363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 
00:25:18.168 [2024-05-15 04:26:06.115773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.115979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.116207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.116370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.116395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.116591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.116787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.116812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.116977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.117176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.117200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.117433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.117623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.117650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.117811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.118279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 
00:25:18.168 [2024-05-15 04:26:06.118675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.118942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.119126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.119327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.119352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.168 qpair failed and we were unable to recover it. 00:25:18.168 [2024-05-15 04:26:06.119531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.168 [2024-05-15 04:26:06.119721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.119746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.119981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.120191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.120220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.120401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.120599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.120624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.120815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.120985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.121010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.121181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.121385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.121410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.121616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.121835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.121859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.122045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.122217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.122248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.122450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.122670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.122694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.122895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.123329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.123713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.123948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.124155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.124331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.124357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.124569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.124794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.124818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.125028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.125229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.125254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.125445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.125608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.125633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.125804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.126226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.126691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.126953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.127178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.127381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.127405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.127596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.127785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.127809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.128089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.128290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.128315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.128505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.128690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.128714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.128890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.129331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.129721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.129951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.130143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.130345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.130368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.130577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.130776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.130800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.130997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.131193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.131228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.131423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.131612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.131637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.131831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.132204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.132601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.132842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.133155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.133362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.133387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.133606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.133779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.133804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.134008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.134175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.134199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.134392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.134567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.134593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.134789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.135205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.135653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.135851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.136075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.136244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.136269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 
00:25:18.169 [2024-05-15 04:26:06.136490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.136688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.136717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.136906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.137134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.137161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.169 qpair failed and we were unable to recover it. 00:25:18.169 [2024-05-15 04:26:06.137409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.169 [2024-05-15 04:26:06.137579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.137605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.137798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.138230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.138628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.138847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.139081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.139258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.139283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 
00:25:18.170 [2024-05-15 04:26:06.139486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.139662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.139687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.140016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.140214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.140240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.140440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.140662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.140689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.170 [2024-05-15 04:26:06.140875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.141075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.170 [2024-05-15 04:26:06.141101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.170 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.141281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.141478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.141503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.141705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.141957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.141999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.142194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.142414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.142451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 
00:25:18.442 [2024-05-15 04:26:06.142655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.142900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.142943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.143152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.143380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.143409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.143619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.143817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.143844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.144081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.144281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.144306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.144513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.144681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.144706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.144900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.145086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.145111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.442 qpair failed and we were unable to recover it. 00:25:18.442 [2024-05-15 04:26:06.145290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.442 [2024-05-15 04:26:06.145487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.145513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 
00:25:18.443 [2024-05-15 04:26:06.145725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.145934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.145959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.146154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.146345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.146375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.146579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.146801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.146825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.147044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.147282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.147307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.147509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.147698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.147723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.147892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.148100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.148125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.148369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.148566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.148590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 
00:25:18.443 [2024-05-15 04:26:06.148792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.148998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.149024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.149202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.149421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.149445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.149644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.149873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.149898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.150117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.150330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.150354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.150554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.150789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.150814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.151054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.151256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.151281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.151506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.151703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.151727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 
00:25:18.443 [2024-05-15 04:26:06.151938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.152358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.152746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.152983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.153186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.153409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.153434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.153594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.153825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.153850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.154035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.154208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.154233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.154455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.154631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.154656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 
00:25:18.443 [2024-05-15 04:26:06.154823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.155243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.155702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.155921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.156133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.156341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.156365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.156540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.156734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.156759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.156988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.157190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.157215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.443 [2024-05-15 04:26:06.157445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.157639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.157664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 
00:25:18.443 [2024-05-15 04:26:06.157824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.157990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.443 [2024-05-15 04:26:06.158015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.443 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.158213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.158405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.158430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.158616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.158848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.158873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.159111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.159318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.159343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.159568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.159767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.159791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.160026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.160222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.160246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.160417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.160589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.160613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 
00:25:18.444 [2024-05-15 04:26:06.160801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.160993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.161018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.161214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.161380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.161405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.161569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.161733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.161758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.161980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.162174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.162200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.162423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.162589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.162613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.162809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.162975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.163001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.163172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.163355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.163380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 
00:25:18.444 [2024-05-15 04:26:06.163579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.163748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.163777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.163980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.164346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.164786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.164993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.165165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.165371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.165398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.165617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.165779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.165803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.166021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.166214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.166239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 
00:25:18.444 [2024-05-15 04:26:06.166472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.166642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.166666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.166861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.167283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.167704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.167904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.168113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.168318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.168343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.168535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.168717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.168742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.168936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 
00:25:18.444 [2024-05-15 04:26:06.169376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.169765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.169998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.444 qpair failed and we were unable to recover it. 00:25:18.444 [2024-05-15 04:26:06.170191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.444 [2024-05-15 04:26:06.170382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.170407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.170581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.170772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.170797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.170994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.171219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.171243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.171404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.171607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.171631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.171830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.171998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.172022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 
00:25:18.445 [2024-05-15 04:26:06.172194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.172403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.172426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.172639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.172827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.172852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.173050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.173225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.173249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.173440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.173660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.173685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.173881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.174326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.174749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.174972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 
00:25:18.445 [2024-05-15 04:26:06.175183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.175391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.175415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.175640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.175829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.175853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.176048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.176211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.176235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.176433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.176621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.176645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.176866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.177281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.177741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.177963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 
00:25:18.445 [2024-05-15 04:26:06.178129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.178302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.178325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.178539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.178735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.178759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.178947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.179161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.179185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.179407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.179601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.179627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.179850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.180228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.180672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.180903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 
00:25:18.445 [2024-05-15 04:26:06.181117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.181323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.181347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.181518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.181710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.181735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.181940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.182137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.182161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.182358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.182640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.182664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.445 qpair failed and we were unable to recover it. 00:25:18.445 [2024-05-15 04:26:06.182864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.445 [2024-05-15 04:26:06.183094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.183120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.183289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.183485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.183509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.183680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.183896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.183920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 
00:25:18.446 [2024-05-15 04:26:06.184149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.184322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.184346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.184544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.184707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.184731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.184943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.185142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.185167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.185373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.185608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.185632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.185858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.186244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.186684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.186880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 
00:25:18.446 [2024-05-15 04:26:06.187125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.187298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.187325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.187493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.187717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.187742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.187969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.188174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.188199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.188406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.188569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.188593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.188760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.188994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.189019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.189220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.189384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.189413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.189611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.189828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.189853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 
00:25:18.446 [2024-05-15 04:26:06.190028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.190198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.190222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.190423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.190587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.190611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.190813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.191232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.191662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.191884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.192092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.192285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.192310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.192509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.192831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.192856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 
00:25:18.446 [2024-05-15 04:26:06.193026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.193437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.193795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.193990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.446 [2024-05-15 04:26:06.194199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.194277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.446 [2024-05-15 04:26:06.194313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.446 [2024-05-15 04:26:06.194327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.446 [2024-05-15 04:26:06.194340] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.446 [2024-05-15 04:26:06.194350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.446 [2024-05-15 04:26:06.194373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.446 [2024-05-15 04:26:06.194396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.446 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.194436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:18.447 [2024-05-15 04:26:06.194488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:18.447 [2024-05-15 04:26:06.194595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.194515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:18.447 [2024-05-15 04:26:06.194518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:18.447 [2024-05-15 04:26:06.194768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.194791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 
00:25:18.447 [2024-05-15 04:26:06.194967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.195374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.195768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.195964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.196167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.196347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.196372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.196539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.196750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.196779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.196975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.197368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 
00:25:18.447 [2024-05-15 04:26:06.197763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.197979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.198210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.198387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.198411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.198604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.198807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.198832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.199026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.199222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.199247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.199437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.199639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.199664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.199870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.200265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 
00:25:18.447 [2024-05-15 04:26:06.200684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.200908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.201133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.201312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.201339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.201517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.201710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.201734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.201908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.202351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.202761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.202988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.203179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.203377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.203402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 
00:25:18.447 [2024-05-15 04:26:06.203568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.203767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.203792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.447 qpair failed and we were unable to recover it. 00:25:18.447 [2024-05-15 04:26:06.203991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.204186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.447 [2024-05-15 04:26:06.204210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.204414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.204611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.204636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.204831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.205105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.205130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.205316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.205515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.205543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.205759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.206306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 
00:25:18.448 [2024-05-15 04:26:06.206702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.206937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.207119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.207307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.207332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.207535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.207709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.207735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.207946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.208329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.208712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.208927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 00:25:18.448 [2024-05-15 04:26:06.209105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.209270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.448 [2024-05-15 04:26:06.209295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.448 qpair failed and we were unable to recover it. 
00:25:18.448 [2024-05-15 04:26:06.209513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.448 [2024-05-15 04:26:06.209714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.448 [2024-05-15 04:26:06.209743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:18.448 qpair failed and we were unable to recover it.
[... the same sequence -- two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously from 2024-05-15 04:26:06.209945 through 2024-05-15 04:26:06.270484 (console timestamps 00:25:18.448 through 00:25:18.453) ...]
00:25:18.453 [2024-05-15 04:26:06.270663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.270844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.270869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.271032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.271194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.271219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.271383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.271547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.271572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.271867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.272253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.272703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.272897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.273109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.273287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.273313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 
00:25:18.453 [2024-05-15 04:26:06.273504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.273669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.273694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.273885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.274092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.274118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.274322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.274542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.274567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.274795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.274993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.275019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.275198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.275371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.275396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.275572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.275769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.275794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 00:25:18.453 [2024-05-15 04:26:06.275990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.276163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.453 [2024-05-15 04:26:06.276188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.453 qpair failed and we were unable to recover it. 
00:25:18.454 [2024-05-15 04:26:06.276391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.276654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.276679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.276880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.277280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.277720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.277917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.278120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.278301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.278326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.278527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.278722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.278746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.278919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.279263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.279288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 
00:25:18.454 [2024-05-15 04:26:06.279513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.279680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.279705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.279903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.280313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.280721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.280921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.281146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.281311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.281338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.281527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.281693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.281719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.281915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.282117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.282143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 
00:25:18.454 [2024-05-15 04:26:06.282380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.282569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.282594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.282783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.282999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.283025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.283192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.283414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.283439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.283637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.283799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.283823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.283989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.284154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.284181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.284378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.284589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.284614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.284820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.284986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.285013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 
00:25:18.454 [2024-05-15 04:26:06.285218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.285393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.285417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.285577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.285773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.285799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.285967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.286157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.286182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.286437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.286638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.286663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.286863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.287274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.287663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.287858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 
00:25:18.454 [2024-05-15 04:26:06.288025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.288221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.288248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.454 qpair failed and we were unable to recover it. 00:25:18.454 [2024-05-15 04:26:06.288423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.454 [2024-05-15 04:26:06.288617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.288641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.288814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.288995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.289021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.289196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.289399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.289423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.289645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.289841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.289865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.290056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.290259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.290284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.290460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.290632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.290658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 
00:25:18.455 [2024-05-15 04:26:06.290853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.291274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.291711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.291937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.292110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.292315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.292340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.292512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.292718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.292742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.292978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.293172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.293197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.293369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.293563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.293588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 
00:25:18.455 [2024-05-15 04:26:06.293799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.293985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.294010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.294176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.294344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.294368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.294588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.294783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.294808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.294981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.295344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.295708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.295938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.296114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.296339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.296364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 
00:25:18.455 [2024-05-15 04:26:06.296603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.296778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.296803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.296987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.297386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.297740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.297948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.298117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.298271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.298296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.298486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.298705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.298730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.298941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 
00:25:18.455 [2024-05-15 04:26:06.299354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.455 qpair failed and we were unable to recover it. 00:25:18.455 [2024-05-15 04:26:06.299710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.455 [2024-05-15 04:26:06.299913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.300098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.300297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.300322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.300523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.300710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.300734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.300901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.301302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.301710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.301922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 
00:25:18.456 [2024-05-15 04:26:06.302125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.302311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.302336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.302530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.302745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.302771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.302937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.303293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.303710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.303919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.304095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.304265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.304291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.304481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.304681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.304706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 
00:25:18.456 [2024-05-15 04:26:06.304870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.305271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.305640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.305891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.306068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.306259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.306284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.306483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.306645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.306670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.306836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.306997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.307022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.307186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.307353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.307377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 
00:25:18.456 [2024-05-15 04:26:06.307579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.307749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.307775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.307956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.308340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.308766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.308983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.309144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.309316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.309341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.309540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.309734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.309760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.456 [2024-05-15 04:26:06.309945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.310139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.310164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 
00:25:18.456 [2024-05-15 04:26:06.310371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.310571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.456 [2024-05-15 04:26:06.310596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.456 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.310792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.310978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.311004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.311176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.311351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.311376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.311569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.311787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.311811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.311982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.312404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 00:25:18.457 [2024-05-15 04:26:06.312764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.457 [2024-05-15 04:26:06.312982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.457 qpair failed and we were unable to recover it. 
00:25:18.459 [2024-05-15 04:26:06.343444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.343609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.343634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:18.459 qpair failed and we were unable to recover it.
00:25:18.459 [2024-05-15 04:26:06.343808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.343977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.344004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:18.459 qpair failed and we were unable to recover it.
00:25:18.459 [2024-05-15 04:26:06.344211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.344401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.344426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420
00:25:18.459 qpair failed and we were unable to recover it.
00:25:18.459 [2024-05-15 04:26:06.344647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.344827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.344855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:18.459 qpair failed and we were unable to recover it.
00:25:18.459 [2024-05-15 04:26:06.345041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.345242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.345267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:18.459 qpair failed and we were unable to recover it.
00:25:18.459 [2024-05-15 04:26:06.345443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.459 [2024-05-15 04:26:06.345622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.460 [2024-05-15 04:26:06.345649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:18.460 qpair failed and we were unable to recover it.
00:25:18.460 [2024-05-15 04:26:06.345843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.460 [2024-05-15 04:26:06.346021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.460 [2024-05-15 04:26:06.346048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:18.460 qpair failed and we were unable to recover it.
00:25:18.462 [2024-05-15 04:26:06.371147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.371325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.371350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.371551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.371742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.371766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.371960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.372193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.372217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.372410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.372573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.372598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.372797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.372986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.373016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.373211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.373434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.373458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.373650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.373859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.373886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 
00:25:18.462 [2024-05-15 04:26:06.374063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.374237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.374261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.374440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.374611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.374637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.374828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.375247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.375653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.375904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.376087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.376264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.376291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.376452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.376611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.376635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 
00:25:18.462 [2024-05-15 04:26:06.376861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.377055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.377080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.462 qpair failed and we were unable to recover it. 00:25:18.462 [2024-05-15 04:26:06.377281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.462 [2024-05-15 04:26:06.377467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.377491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.377687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.377875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.377899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.378101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.378275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.378299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.378495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.378654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.378678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.378866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.379286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 
00:25:18.463 [2024-05-15 04:26:06.379671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.379854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.380021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.380222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.380247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.380433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.380651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.380675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.380872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.381229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.381620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.381811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.382008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.382176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.382202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 
00:25:18.463 [2024-05-15 04:26:06.382396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.382569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.382594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.382794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.382992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.383017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.383197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.383398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.383423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.383594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.383753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.383778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.384009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.384184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.384209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.384409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.384604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.384628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.384802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 
00:25:18.463 [2024-05-15 04:26:06.385244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.385598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.385812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.385973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.386367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.386743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.386959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.387142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.387341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.387365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.387566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.387794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.387820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 
00:25:18.463 [2024-05-15 04:26:06.388007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.388394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.388802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.463 [2024-05-15 04:26:06.388996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.463 qpair failed and we were unable to recover it. 00:25:18.463 [2024-05-15 04:26:06.389170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.389379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.389403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.389595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.389768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.389795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.389976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.390399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 
00:25:18.464 [2024-05-15 04:26:06.390750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.390983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.391170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.391342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.391367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.391573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.391751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.391776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.391980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.392165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.392190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.392366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.392590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.392614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.392806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.392980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.393004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.393194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.393393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.393422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 
00:25:18.464 [2024-05-15 04:26:06.393595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.393760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.393784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.393954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.394343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.394736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.394942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.395114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.395287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.395312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.395502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.395700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.395725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.395895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 
00:25:18.464 [2024-05-15 04:26:06.396343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.396711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.396939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.397108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.397295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.397324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.397482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.397650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.397674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.397867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.398263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.398642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.398828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 
00:25:18.464 [2024-05-15 04:26:06.399022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.399196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.399222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.399389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.399558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.399583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.399783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.399981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.400006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.400204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.400401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.464 [2024-05-15 04:26:06.400426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.464 qpair failed and we were unable to recover it. 00:25:18.464 [2024-05-15 04:26:06.400593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.400809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.400834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.401034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.401390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 
00:25:18.465 [2024-05-15 04:26:06.401770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.401999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.402181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.402372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.402397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.402568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.402726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.402750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.402950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.403356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.403736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.403952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.404136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.404337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.404361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 
00:25:18.465 [2024-05-15 04:26:06.404544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.404705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.404730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.404895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.405290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.405687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.405872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.406068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.406228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.406253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.406424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.406616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.406641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.406859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 
00:25:18.465 [2024-05-15 04:26:06.407221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.407603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.407823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.407996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.408354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.408764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.408984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.409151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.409340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.409365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.409535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.409698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.409723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 
00:25:18.465 [2024-05-15 04:26:06.409884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.410075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.410101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.465 qpair failed and we were unable to recover it. 00:25:18.465 [2024-05-15 04:26:06.410273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.465 [2024-05-15 04:26:06.410472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.410497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.410673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.410862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.410887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.411086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.411277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.411303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.411465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.411629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.411653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.411814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.412200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 
00:25:18.466 [2024-05-15 04:26:06.412611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.412809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.413009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.413175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.413200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.413386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.413601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.413628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.413784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.414229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.414633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.414829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.415006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 
00:25:18.466 [2024-05-15 04:26:06.415375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.415772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.415968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.416192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.416357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.416383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.416586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.416777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.416801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.417024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.417380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.417753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.417997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 
00:25:18.466 [2024-05-15 04:26:06.418184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.418372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.418396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.418557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.418745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.418769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.418955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.419343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.419714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.419943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.420142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.420304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.420328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.420493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.420680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.420707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 
00:25:18.466 [2024-05-15 04:26:06.420900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.421326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.421697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.421878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.466 qpair failed and we were unable to recover it. 00:25:18.466 [2024-05-15 04:26:06.422058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.466 [2024-05-15 04:26:06.422262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.422287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.422472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.422689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.422713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.422877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.423311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 
00:25:18.467 [2024-05-15 04:26:06.423683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.423879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.424079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.424247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.424272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.424435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.424621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.424645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.424837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.425228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.425587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.425779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.425994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.426180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.426205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 
00:25:18.467 [2024-05-15 04:26:06.426410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.426600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.426624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.426813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.426988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.427013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.427199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.427353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.427380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.427601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.427769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.427794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.427958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.428339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.428750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.428966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 
00:25:18.467 [2024-05-15 04:26:06.429163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.429332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.429355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.429545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.429709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.429733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.429914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.430302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.430713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.430927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.431146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.431330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.431354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.431519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.431693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.431722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 
00:25:18.467 [2024-05-15 04:26:06.431922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.432312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.432703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.432925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.433118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.433289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.433317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.467 [2024-05-15 04:26:06.433477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.433676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.467 [2024-05-15 04:26:06.433701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.467 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.433892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.434290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 
00:25:18.468 [2024-05-15 04:26:06.434712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.434895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.435065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.435236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.435261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.435442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.435605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.435630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.435803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.436194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.436556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.436781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.436977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.437145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.437175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 
00:25:18.468 [2024-05-15 04:26:06.437377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.437569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.437593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.437788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.437990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.438016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.438203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.438398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.438423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.438592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.438763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.438788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.438951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.439359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.439765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.439965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 
00:25:18.468 [2024-05-15 04:26:06.440154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.440333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.440366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.440594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.440759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.440784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.440987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.441186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.441211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.441379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.441567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.441594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.441792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.441985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.442011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.442205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.442396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.442422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.442642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.442812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.442836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 
00:25:18.468 [2024-05-15 04:26:06.443009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.443211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.443237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.468 [2024-05-15 04:26:06.443440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.443611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.468 [2024-05-15 04:26:06.443639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.468 qpair failed and we were unable to recover it. 00:25:18.740 [2024-05-15 04:26:06.443867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.740 qpair failed and we were unable to recover it. 00:25:18.740 [2024-05-15 04:26:06.444284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.740 qpair failed and we were unable to recover it. 00:25:18.740 [2024-05-15 04:26:06.444740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.444993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.740 qpair failed and we were unable to recover it. 00:25:18.740 [2024-05-15 04:26:06.445217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.445428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.445460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.740 qpair failed and we were unable to recover it. 00:25:18.740 [2024-05-15 04:26:06.445681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.445913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.445954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.740 qpair failed and we were unable to recover it. 
00:25:18.740 [2024-05-15 04:26:06.446159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.740 [2024-05-15 04:26:06.446323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.446349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.446507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.446723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.446748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.446920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.447331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.447713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.447937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.448105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.448285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.448309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.448501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.448661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.448686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 
00:25:18.741 [2024-05-15 04:26:06.448878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.449301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.449698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.449887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.450084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.450255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.450280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.450508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.450674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.450698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.450891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.451297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 
00:25:18.741 [2024-05-15 04:26:06.451696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.451879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.452051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.452211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.452235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.452462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.452625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.452650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.452839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.453220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.453606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.453849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.454013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 
00:25:18.741 [2024-05-15 04:26:06.454389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.454744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.454941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.455113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.455300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.455324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.455523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.455716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.455741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.455909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.456327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.456696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.456897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 
00:25:18.741 [2024-05-15 04:26:06.457086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.457253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.741 [2024-05-15 04:26:06.457278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.741 qpair failed and we were unable to recover it. 00:25:18.741 [2024-05-15 04:26:06.457439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.457601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.457630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.457822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.457993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.458021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.458191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.458368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.458393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.458559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.458727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.458753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.458945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.459332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 
00:25:18.742 [2024-05-15 04:26:06.459720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.459921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.460097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.460298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.460324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.460512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.460699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.460724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.460939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.461329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.461738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.461967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.462163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.462336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.462361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 
00:25:18.742 [2024-05-15 04:26:06.462576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.462733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.462758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.462920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.463317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.463703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.463918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.464106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.464291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.464318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.464509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.464666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.464691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.464861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 
00:25:18.742 [2024-05-15 04:26:06.465275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.465708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.465925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.466135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.466334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.466358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.466525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.466729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.466753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.466919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.467309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 00:25:18.742 [2024-05-15 04:26:06.467718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.742 [2024-05-15 04:26:06.467962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.742 qpair failed and we were unable to recover it. 
00:25:18.742 [2024-05-15 04:26:06.468131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.468300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.468326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.468491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.468681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.468706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.468921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.469353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.469699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.469891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.470065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.470231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.470256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.470445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.470609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.470635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 
00:25:18.743 [2024-05-15 04:26:06.470814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.470992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.471018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.471217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.471379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.471404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.471595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.471762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.471787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.471983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.472367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.472753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.472977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.473162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.473332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.473356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 
00:25:18.743 [2024-05-15 04:26:06.473533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.473734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.473759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.473997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.474217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.474242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.474432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.474595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.474618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.474815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.475216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.475572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.475788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.475963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 
00:25:18.743 [2024-05-15 04:26:06.476344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.476769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.476966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.477162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.477354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.477379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.477540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.477706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.477730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.477933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.478329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 00:25:18.743 [2024-05-15 04:26:06.478767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.478961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.743 qpair failed and we were unable to recover it. 
00:25:18.743 [2024-05-15 04:26:06.479134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.479364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.743 [2024-05-15 04:26:06.479389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.479554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.479751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.479776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.479962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.480380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.480746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.480945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.481122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.481327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.481352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.481547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.481703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.481732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 
00:25:18.744 [2024-05-15 04:26:06.481927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.482333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.482721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.482945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.483120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.483338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.483362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.483528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.483691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.483715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.483906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.484298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 
00:25:18.744 [2024-05-15 04:26:06.484654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.484847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.485013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.485226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.485253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.485454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.485623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.485648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.485843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.486240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.486615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.486809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.487007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.487176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.487205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 
00:25:18.744 [2024-05-15 04:26:06.487376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.487571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.487595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.487791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.487980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.488006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.488203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.488397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.488421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.488591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.488810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.488835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.489010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.489208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.489233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.489450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.489615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.489640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.744 [2024-05-15 04:26:06.489867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.490035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.490060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 
00:25:18.744 [2024-05-15 04:26:06.490233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.490431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.744 [2024-05-15 04:26:06.490456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.744 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.490653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.490825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.490850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.491046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.491407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.491754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.491985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.492145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.492322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.492347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.492544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.492736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.492761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 
00:25:18.745 [2024-05-15 04:26:06.492955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.493172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.493197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.493358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.493553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.493578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.493634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6d0b0 (9): Bad file descriptor 00:25:18.745 [2024-05-15 04:26:06.493948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.494167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.494206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.494406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.494598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.494624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.494801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.495193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.495633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.495844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 
00:25:18.745 [2024-05-15 04:26:06.496041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.496197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.496223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.496393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.496587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.496613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.496809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.497208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.497590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.497807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.498219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.498416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.498442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.498636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.498804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.498829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 
00:25:18.745 [2024-05-15 04:26:06.499034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.499205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.499232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.499402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.499596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.499623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.745 qpair failed and we were unable to recover it. 00:25:18.745 [2024-05-15 04:26:06.499795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.499975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.745 [2024-05-15 04:26:06.500001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.500198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.500365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.500392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.500567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.500726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.500752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.500953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.501333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 
00:25:18.746 [2024-05-15 04:26:06.501734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.501916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.502109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.502302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.502327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.502493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.502665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.502692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.502885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.503301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.503686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.503887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.504085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.504262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.504287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 
00:25:18.746 [2024-05-15 04:26:06.504461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.504651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.504676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.504867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.505252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.505660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.505873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.506050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.506214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.506240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.506429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.506599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.506624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.506783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.506985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.507011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 
00:25:18.746 [2024-05-15 04:26:06.507187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.507375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.507401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.507625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.507791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.507818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.507977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.508136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.508162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.508343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.508538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.508565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.508768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.508974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.509000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.509200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.509370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.509396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 00:25:18.746 [2024-05-15 04:26:06.509591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.509782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.746 [2024-05-15 04:26:06.509808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.746 qpair failed and we were unable to recover it. 
00:25:18.746 [2024-05-15 04:26:06.509982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.746 [2024-05-15 04:26:06.510170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.746 [2024-05-15 04:26:06.510196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420
00:25:18.746 qpair failed and we were unable to recover it.
00:25:18.746 [... the same sequence repeats continuously from 04:26:06.510 through 04:26:06.570: posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:25:18.752 [2024-05-15 04:26:06.570163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.752 [2024-05-15 04:26:06.570332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.752 [2024-05-15 04:26:06.570358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420
00:25:18.752 qpair failed and we were unable to recover it.
00:25:18.752 [2024-05-15 04:26:06.570556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.570750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.570776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.570945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.571326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.571719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.571922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.572128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.572312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.572337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.572538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.572736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.572762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.572972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 
00:25:18.752 [2024-05-15 04:26:06.573410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.573776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.573982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.574158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.574326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.574352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.574511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.574732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.574758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.574977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.575361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.575753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.575952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 
00:25:18.752 [2024-05-15 04:26:06.576147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.576315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.752 [2024-05-15 04:26:06.576340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.752 qpair failed and we were unable to recover it. 00:25:18.752 [2024-05-15 04:26:06.576533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.576690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.576715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.576941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.577124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.577150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a44000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.577374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.577613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.577641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.577819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.578219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.578584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.578784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 
00:25:18.753 [2024-05-15 04:26:06.578958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.579374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.579753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.579951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.580115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.580303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.580328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.580608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.580774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.580800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.580991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.581164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.581195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.581408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.581602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.581628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 
00:25:18.753 [2024-05-15 04:26:06.581801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.581992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.582017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.582239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.582434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.582459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.582654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.582847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.582873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.583062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.583226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.583252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.583428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.583598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.583624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.583812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.584225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 
00:25:18.753 [2024-05-15 04:26:06.584591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.584820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.584991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.585367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.585773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.585980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.586184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.586386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.586411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.586585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.586751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.586776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.586948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.587114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.587140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 
00:25:18.753 [2024-05-15 04:26:06.587307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.587503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.587528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.753 qpair failed and we were unable to recover it. 00:25:18.753 [2024-05-15 04:26:06.587726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.753 [2024-05-15 04:26:06.587924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.587954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.588133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.588308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.588334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.588500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.588669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.588695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.588855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.589251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.589665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.589865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 
00:25:18.754 [2024-05-15 04:26:06.590061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.590252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.590278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.590474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.590698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.590722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.590886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.591284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.591669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.591882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.592074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.592248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.592273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.592466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.592660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.592685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 
00:25:18.754 [2024-05-15 04:26:06.592861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.593222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.593618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.593833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.594065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.594226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.594251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.594439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.594622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.594647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.594839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.595190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 
00:25:18.754 [2024-05-15 04:26:06.595593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.595843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.596013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.596206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.596232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.596407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.596599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.596624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.754 qpair failed and we were unable to recover it. 00:25:18.754 [2024-05-15 04:26:06.596849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.754 [2024-05-15 04:26:06.597071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.597097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.597297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.597469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.597498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.597699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.597899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.597926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.598130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.598330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.598355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 
00:25:18.755 [2024-05-15 04:26:06.598524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.598693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.598718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.598898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.599265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.599697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.599918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.600095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.600271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.600296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.600493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.600695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.600721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.600884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 
00:25:18.755 [2024-05-15 04:26:06.601263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.601633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.601879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.602069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.602272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.602298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.602500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.602671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.602698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.602892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.603321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.603719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.603905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 
00:25:18.755 [2024-05-15 04:26:06.604086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.604257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.604283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.604478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.604661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.604686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.604883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.605254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.605658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.605858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.606070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.606243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.606270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.606493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.606687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.606712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 
00:25:18.755 [2024-05-15 04:26:06.606913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.607326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.607743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.607926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.755 [2024-05-15 04:26:06.608106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.608289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.755 [2024-05-15 04:26:06.608314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.755 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.608482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.608646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.608672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.608876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.609256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 
00:25:18.756 [2024-05-15 04:26:06.609644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.609834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.610035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.610231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.610256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.610421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.610603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.610628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.610825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.610994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.611020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.611209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.611428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.611453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.611647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.611874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.611899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 00:25:18.756 [2024-05-15 04:26:06.612185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.612359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.756 [2024-05-15 04:26:06.612384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a4c000b90 with addr=10.0.0.2, port=4420 00:25:18.756 qpair failed and we were unable to recover it. 
00:25:18.757 [2024-05-15 04:26:06.622119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.757 [2024-05-15 04:26:06.622351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.757 [2024-05-15 04:26:06.622378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a54000b90 with addr=10.0.0.2, port=4420 00:25:18.757 qpair failed and we were unable to recover it.
00:25:18.758 [2024-05-15 04:26:06.641591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.758 [2024-05-15 04:26:06.641798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.758 [2024-05-15 04:26:06.641826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.758 qpair failed and we were unable to recover it.
[the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." sequence repeats continuously from 04:26:06.609644 through 04:26:06.670570, cycling through tqpair=0x7f2a4c000b90, tqpair=0x7f2a54000b90, and tqpair=0x1b70420, all against addr=10.0.0.2, port=4420]
00:25:18.761 [2024-05-15 04:26:06.670769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.670926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.670957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.671122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.671290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.671314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.671534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.671724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.671748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.671904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.672296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.672684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.672901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.673112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.673289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.673313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 
00:25:18.761 [2024-05-15 04:26:06.673505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.673661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.673686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.761 qpair failed and we were unable to recover it. 00:25:18.761 [2024-05-15 04:26:06.673878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.761 [2024-05-15 04:26:06.674072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.674099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.674289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.674489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.674514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.674681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.674845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.674869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.675038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.675237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.675263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.675434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.675602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.675627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.675793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.675988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.676014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 
00:25:18.762 [2024-05-15 04:26:06.676178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.676339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.676364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.676534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.676723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.676747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.676909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.677269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.677681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.677880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.678099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.678298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.678323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.678517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.678709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.678734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 
00:25:18.762 [2024-05-15 04:26:06.678901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.679310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.679691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.679879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.680056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.680229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.680255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.680457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.680628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.680653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.680848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.681237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 
00:25:18.762 [2024-05-15 04:26:06.681616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.681815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.681981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.682154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.682180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.682381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.682542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.682569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.682789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.682988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.683014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.683184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.683377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.683402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.683589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.683784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.683808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.683969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.684160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.684185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 
00:25:18.762 [2024-05-15 04:26:06.684375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.684547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.684573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.762 [2024-05-15 04:26:06.684799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.684989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.762 [2024-05-15 04:26:06.685014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.762 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.685180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.685347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.685373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.685566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.685751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.685776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.686000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.686194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.686219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.686437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.686643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.686668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.686841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 
00:25:18.763 [2024-05-15 04:26:06.687245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.687607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.687796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.687980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.688174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.688199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.688393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.688559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.688584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.688812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.688979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.689004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.689171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.689365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.689390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.689586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.689755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.689783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 
00:25:18.763 [2024-05-15 04:26:06.689978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.690390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.690757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.690994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.691170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.691339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.691365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.691563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.691781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.691805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.691993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.692408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 
00:25:18.763 [2024-05-15 04:26:06.692772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.692996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.693171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.693370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.693394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.693559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.693750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.693779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.693942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.694316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.694726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.694907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.695078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.695270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.695294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 
00:25:18.763 [2024-05-15 04:26:06.695493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.695663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.695687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.695877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.696043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.763 [2024-05-15 04:26:06.696068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.763 qpair failed and we were unable to recover it. 00:25:18.763 [2024-05-15 04:26:06.696225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.696417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.696443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.696613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.696813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.696837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.697087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.697252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.697276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.697443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.697642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.697666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.697838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 
00:25:18.764 [2024-05-15 04:26:06.698246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.698657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.698859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.699048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.699251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.699277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.699456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.699672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.699696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.699868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.700252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.700630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.700879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 
00:25:18.764 [2024-05-15 04:26:06.701051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.701425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.701790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.701984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.702150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.702348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.702372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.702531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.702719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.702744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.702941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.703324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 
00:25:18.764 [2024-05-15 04:26:06.703738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.703936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.704097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.704262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.704287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.704460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.704659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.704683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.704876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.705049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.705074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.705240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.705436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.705463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.764 qpair failed and we were unable to recover it. 00:25:18.764 [2024-05-15 04:26:06.705641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.764 [2024-05-15 04:26:06.705821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.705848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.706019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.706220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.706245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 
00:25:18.765 [2024-05-15 04:26:06.706434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.706597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.706621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.706811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.706983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.707008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.707175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.707341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.707365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.707560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.707754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.707779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.707981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.708369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.708758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.708962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 
00:25:18.765 [2024-05-15 04:26:06.709144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.709309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.709334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.709499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.709669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.709694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.709855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.710270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.710666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.710874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.711037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.711228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.711252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.711414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.711602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.711627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 
00:25:18.765 [2024-05-15 04:26:06.711850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.712257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.712643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.712840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.713039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.713395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.713778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.713963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.714135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.714338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.714363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 
00:25:18.765 [2024-05-15 04:26:06.714555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.714711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.714735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.765 qpair failed and we were unable to recover it. 00:25:18.765 [2024-05-15 04:26:06.714902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.765 [2024-05-15 04:26:06.715099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.715124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.715327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.715545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.715569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.715745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.715939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.715964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.716139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.716333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.716358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.716550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.716740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.716765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.716934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 
00:25:18.766 [2024-05-15 04:26:06.717348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.717713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.717928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.718140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.718306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.718331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.718488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.718707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.718731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.718926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.719310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.719667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.719851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 
00:25:18.766 [2024-05-15 04:26:06.720014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.720187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.720214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.720441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.720604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.720628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.720813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.721217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.721581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.721819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.722008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.722365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 
00:25:18.766 [2024-05-15 04:26:06.722773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.722963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.723149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.723322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.723348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.723518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.723689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.723714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.723884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.724284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.724693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.724890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.725086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.725262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.725287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 
00:25:18.766 [2024-05-15 04:26:06.725482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.725655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.725680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.725876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.726049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.726075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.766 qpair failed and we were unable to recover it. 00:25:18.766 [2024-05-15 04:26:06.726272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.766 [2024-05-15 04:26:06.726441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.726466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.726642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.726826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.726851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.727043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.727216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.727240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.727408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.727569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.727594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.727788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 
00:25:18.767 [2024-05-15 04:26:06.728233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.728590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.728807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.728976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.729355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.729748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.729989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.730182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.730365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.730391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.730582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.730740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.730765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 
00:25:18.767 [2024-05-15 04:26:06.730958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.731360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.731758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.731980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.732149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.732314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.732339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.732498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.732663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.732687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.732857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.733222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 
00:25:18.767 [2024-05-15 04:26:06.733620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.733861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.734055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.734250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.734275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.734465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.734633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.734658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.734830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.735214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.735602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.735789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.736011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 
00:25:18.767 [2024-05-15 04:26:06.736371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.736735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.736965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.767 qpair failed and we were unable to recover it. 00:25:18.767 [2024-05-15 04:26:06.737131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.767 [2024-05-15 04:26:06.737351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.737377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.737561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.737750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.737783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.738002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.738177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.738203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.738380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.738583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.738610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.738798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.738981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.739007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 
00:25:18.768 [2024-05-15 04:26:06.739203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.739400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.739426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.739620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.739796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.739831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.740045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.740218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.740245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.740473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.740667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.740692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.740863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.741031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.741059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.741232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.741450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.768 [2024-05-15 04:26:06.741478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:18.768 qpair failed and we were unable to recover it. 00:25:18.768 [2024-05-15 04:26:06.741698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.741891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.741916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 
00:25:19.040 [2024-05-15 04:26:06.742105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.742302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.742335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.742571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.742751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.742784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.742964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.743218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.743252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.743450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.743666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.743699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.743891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.744140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.744168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.744385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.744583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.744609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.744812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.744986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.745012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 
00:25:19.040 [2024-05-15 04:26:06.745214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.745380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.745405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.745607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.745827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.745852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.746021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.746237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.746262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.746481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.746667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.746692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.746881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.747293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.747733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.747921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 
00:25:19.040 [2024-05-15 04:26:06.748110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.748302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.748327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.748493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.748660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.748685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.040 [2024-05-15 04:26:06.748853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.749020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.040 [2024-05-15 04:26:06.749049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.040 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.749212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.749416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.749441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.749606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.749762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.749787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.749954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.750372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 
00:25:19.041 [2024-05-15 04:26:06.750780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.750975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.751170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.751342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.751367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.751539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.751738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.751762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.751949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.752362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.752770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.752962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.753166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.753325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.753350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 
00:25:19.041 [2024-05-15 04:26:06.753528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.753712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.753737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.753909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.754323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.754751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.754966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.755161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.755319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.755343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.755533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.755728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.755752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.755949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 
00:25:19.041 [2024-05-15 04:26:06.756331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.756726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.756952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.757130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.757324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.757349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.757550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.757716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.757740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.757940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.758343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.758729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.758927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 
00:25:19.041 [2024-05-15 04:26:06.759105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.759301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.759325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.759538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.759703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.759728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.759921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.760300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.041 qpair failed and we were unable to recover it. 00:25:19.041 [2024-05-15 04:26:06.760707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.041 [2024-05-15 04:26:06.760924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.761126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.761320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.761344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.761510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.761672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.761696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 
00:25:19.042 [2024-05-15 04:26:06.761925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.762350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.762714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.762938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.763148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.763318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.763342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.763508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.763666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.763691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.763854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.764217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 
00:25:19.042 [2024-05-15 04:26:06.764675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.764865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.765060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.765250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.765275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.765496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.765714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.765740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.765951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.766166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.766191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.766387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.766586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.766610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.766811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.766996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.767022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.767190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.767384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.767411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 
00:25:19.042 [2024-05-15 04:26:06.767603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.767772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.767797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.767967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.768168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.768193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.768356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.768571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.768595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.768794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.769225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.769612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.769824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.770021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.770237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.770262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 
00:25:19.042 [2024-05-15 04:26:06.770449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.770654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.770678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.770885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.771303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.771693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.771874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.772055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.772230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.772256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.772459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.772653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.042 [2024-05-15 04:26:06.772678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.042 qpair failed and we were unable to recover it. 00:25:19.042 [2024-05-15 04:26:06.772843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 
00:25:19.043 [2024-05-15 04:26:06.773257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.773651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.773889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.774087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.774251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.774276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.774485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.774657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.774684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.774885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.775246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.775664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.775856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 
00:25:19.043 [2024-05-15 04:26:06.776049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.776221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.776248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.776465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.776632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.776657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.776818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.777200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.777636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.777820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.777985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.778210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.778236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.778428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.778618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.778647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 
00:25:19.043 [2024-05-15 04:26:06.778835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.779215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.779613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.779795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.779966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.780375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.780770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.780956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.781133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.781298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.781323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 
00:25:19.043 [2024-05-15 04:26:06.781491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.781684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.781709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.781898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.782295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.782679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.782872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.783096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.783263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.783289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.783472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.783669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.783694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.043 [2024-05-15 04:26:06.783872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.784040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.784066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 
00:25:19.043 [2024-05-15 04:26:06.784233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.784420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.043 [2024-05-15 04:26:06.784445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.043 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.784640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.784818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.784845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.785015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.785185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.785213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.785408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.785610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.785637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.785815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.786236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.786630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.786855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 
00:25:19.044 [2024-05-15 04:26:06.787075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.787276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.787300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.787466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.787654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.787679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.787850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.788265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.788648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.788873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.789045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.789240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.789265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.789456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.789620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.789645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 
00:25:19.044 [2024-05-15 04:26:06.789814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.789980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.790005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.790197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.790389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.790414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.790612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.790787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.790811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.790984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.791183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.791207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.791390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.791587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.791612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.791798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.791989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.792014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.792211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.792382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.792406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 
00:25:19.044 [2024-05-15 04:26:06.792586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.792781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.792804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.793011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.793215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.793239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.793401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.793615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.793642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.044 [2024-05-15 04:26:06.793831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.794012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.044 [2024-05-15 04:26:06.794037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.044 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.794229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.794420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.794445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.794633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.794817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.794842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.795032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.795194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.795220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 
00:25:19.045 [2024-05-15 04:26:06.795419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.795613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.795637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.795832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.796195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.796610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.796834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.797006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.797230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.797255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.797431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.797619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.797643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.797829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 
00:25:19.045 [2024-05-15 04:26:06.798244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.798656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.798884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.799084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.799260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.799284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.799479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.799667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.799691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.799887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.800274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.800708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.800939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 
00:25:19.045 [2024-05-15 04:26:06.801110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.801277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.801301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.801496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.801680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.801704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.801934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.802296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.802679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.802867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.803035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.803203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.803228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.803416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.803608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.803632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 
00:25:19.045 [2024-05-15 04:26:06.803827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.804262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.804664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.804851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.805043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.805234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.805259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.045 qpair failed and we were unable to recover it. 00:25:19.045 [2024-05-15 04:26:06.805481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.805635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.045 [2024-05-15 04:26:06.805659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.805854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.806241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 
00:25:19.046 [2024-05-15 04:26:06.806620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.806842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.807040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.807231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.807257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.807423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.807611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.807636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.807799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.807976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.808001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.808184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.808381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.808406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.808562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.808783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.808808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.809002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.809222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.809246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 
00:25:19.046 [2024-05-15 04:26:06.809414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.809603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.809628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.809811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.809997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.810021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.810241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.810432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.810457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.810651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.810852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.810877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.811067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.811233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.811258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.811463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.811630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.811654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.811843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 
00:25:19.046 [2024-05-15 04:26:06.812210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.812608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.812853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.813014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.813203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.813228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.813427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.813597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.813624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.813799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.813988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.814013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.814211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.814404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.814428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.814613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.814776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.814802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 
00:25:19.046 [2024-05-15 04:26:06.815011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.815197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.815222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.815413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.815632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.815656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.815844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.816225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.816653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.816838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.817015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.817214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.817239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.046 qpair failed and we were unable to recover it. 00:25:19.046 [2024-05-15 04:26:06.817407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.046 [2024-05-15 04:26:06.817570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.817594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 
00:25:19.047 [2024-05-15 04:26:06.817791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.817954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.817979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.818142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.818327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.818352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.818551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.818735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.818759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.818924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.819350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.819765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.819977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.820144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.820312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.820339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 
00:25:19.047 [2024-05-15 04:26:06.820534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.820702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.820727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.820915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.821310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.821706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.821919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.822124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.822348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.822373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.822539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.822721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.822746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.822951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.823191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.823216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 
00:25:19.047 [2024-05-15 04:26:06.823443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.823642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.823666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.823848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.824260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.824644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.824834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.825009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.825234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.825259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.825446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.825639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.825664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.825835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 
00:25:19.047 [2024-05-15 04:26:06.826228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.826662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.826900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.827095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.827266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.827291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.827492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.827687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.827713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.827915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.828355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.828743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.828989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 
00:25:19.047 [2024-05-15 04:26:06.829176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.829357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.829381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.047 qpair failed and we were unable to recover it. 00:25:19.047 [2024-05-15 04:26:06.829570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.047 [2024-05-15 04:26:06.829753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.829778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.829971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.830411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.830801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.830988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.831210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.831400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.831425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.831600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.831795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.831819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 
00:25:19.048 [2024-05-15 04:26:06.832012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.832386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.832753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.832974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.833163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.833320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.833345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.833503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.833725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.833749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.833947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.834351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 
00:25:19.048 [2024-05-15 04:26:06.834746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.834951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.835117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.835313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.835341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.835541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.835709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.835734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.835908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.836314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.836748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.836944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.837141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.837333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.837358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 
00:25:19.048 [2024-05-15 04:26:06.837548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.837744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.837769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.837964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.838375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.838728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.838948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.839145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.839327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.839352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.839546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.839729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.839757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.839918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 
00:25:19.048 [2024-05-15 04:26:06.840326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.840701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.840943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.841110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.841309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.048 [2024-05-15 04:26:06.841333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.048 qpair failed and we were unable to recover it. 00:25:19.048 [2024-05-15 04:26:06.841493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.841708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.841732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.841914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.842306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.842666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.842854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 
00:25:19.049 [2024-05-15 04:26:06.843045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.843243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.843268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.843455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.843635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.843663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.843854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.844272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.844656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.844870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.845090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.845280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.845306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.845489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.845653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.845678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 
00:25:19.049 [2024-05-15 04:26:06.845877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.846247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.846627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.846817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.846995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.847195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.847220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.847410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.847593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.847618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.847813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.848230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 
00:25:19.049 [2024-05-15 04:26:06.848644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.848886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.849086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.849257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.849284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.849458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.849630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.849655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.849819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.849986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.850011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.850204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.850401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.850427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.850632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.850815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.850840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.049 qpair failed and we were unable to recover it. 00:25:19.049 [2024-05-15 04:26:06.851029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.851227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.049 [2024-05-15 04:26:06.851252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 
00:25:19.050 [2024-05-15 04:26:06.851444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.851607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.851631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.851816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.852251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.852661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.852849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.853037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.853200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.853225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.853413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.853603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.853627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.853815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 
00:25:19.050 [2024-05-15 04:26:06.854236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.854649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.854836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.855006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.855192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.855217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.855404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.855626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.855651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.855832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.856248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.856630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.856847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 
00:25:19.050 [2024-05-15 04:26:06.857016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.857208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.857233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.857403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.857597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.857622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.857818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.857983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.858009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.858197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.858393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.858418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.858607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.858803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.858828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.859030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.859224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.859250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.859440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.859628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.859653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 
00:25:19.050 [2024-05-15 04:26:06.859833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.860270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.860652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.860877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.861071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.861264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.861289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.861513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.861706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.861730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.861912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.862329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 
00:25:19.050 [2024-05-15 04:26:06.862712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.862910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.050 qpair failed and we were unable to recover it. 00:25:19.050 [2024-05-15 04:26:06.863117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.050 [2024-05-15 04:26:06.863312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.863336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.863536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.863705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.863731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.863920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.864295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.864662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.864888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.865085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.865256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.865280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 
00:25:19.051 [2024-05-15 04:26:06.865477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.865669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.865694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.865852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.866218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.866568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.866814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.866986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.867392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.867802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.867990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 
00:25:19.051 [2024-05-15 04:26:06.868189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.868378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.868403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.868584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.868752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.868777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.868971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.869343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.869745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.869951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.870142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.870309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.870333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.870493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.870662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.870686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 
00:25:19.051 [2024-05-15 04:26:06.870871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.871232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.871609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.871835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.872056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.872237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.872262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.872448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.872661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.872686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.872873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.873282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 
00:25:19.051 [2024-05-15 04:26:06.873718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.873911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.874114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.874306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.874331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.874534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.874707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.874731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.051 qpair failed and we were unable to recover it. 00:25:19.051 [2024-05-15 04:26:06.874922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.051 [2024-05-15 04:26:06.875106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.875130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.875323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.875516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.875540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.875701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.875918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.875957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.876160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.876350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.876374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 
00:25:19.052 [2024-05-15 04:26:06.876541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.876758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.876783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.876968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.877364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.877755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.877955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.878180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.878346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.878370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.878542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.878712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.878736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.878960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 
00:25:19.052 [2024-05-15 04:26:06.879317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.879732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.879926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.880101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.880298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.880323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.880487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.880711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.880736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.880894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.881295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.881673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.881870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 
00:25:19.052 [2024-05-15 04:26:06.882037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.882222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.882247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.882423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.882592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.882616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.882799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.882997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.883032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.883223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.883421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.883446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.883634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.883851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.883876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.884041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.884430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 
00:25:19.052 [2024-05-15 04:26:06.884803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.884997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.885183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.885344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.885369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.885537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.885700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.885724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.885922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.886094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.886118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.886287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.886484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.886510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.052 qpair failed and we were unable to recover it. 00:25:19.052 [2024-05-15 04:26:06.886681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.052 [2024-05-15 04:26:06.886869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.886894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.887067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.887254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.887279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 
00:25:19.053 [2024-05-15 04:26:06.887481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.887669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.887694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.887884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.888260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.888665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.888860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.889053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.889242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.889266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.889455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.889645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.889670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.889838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 
00:25:19.053 [2024-05-15 04:26:06.890222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.890670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.890887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.891083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.891255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.891280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.891471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.891668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.891693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.891887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.892307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.892717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.892906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 
00:25:19.053 [2024-05-15 04:26:06.893099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.893255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.893280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.893467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.893663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.893688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.893886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.894255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.894687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.894883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.895074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.895262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.895287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.895508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.895681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.895706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 
00:25:19.053 [2024-05-15 04:26:06.895901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.896275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.896679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.053 [2024-05-15 04:26:06.896896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.053 qpair failed and we were unable to recover it. 00:25:19.053 [2024-05-15 04:26:06.897068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.897268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.897295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.897486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.897652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.897677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.897879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.898243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 
00:25:19.054 [2024-05-15 04:26:06.898646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.898832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.899029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.899217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.899242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.899457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.899645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.899670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.899866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.900250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.900605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.900793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.900967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 
00:25:19.054 [2024-05-15 04:26:06.901380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.901774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.901991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.902183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.902380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.902404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.902566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.902749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.902773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.902969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.903334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.903768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.903990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 
00:25:19.054 [2024-05-15 04:26:06.904155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.904346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.904375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.904573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.904740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.904764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.904961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.905347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.905730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.905965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.906153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.906372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.906397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.906598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.906789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.906814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 
00:25:19.054 [2024-05-15 04:26:06.907003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.907196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.907222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.907422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.907638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.907663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.907829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.907993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.908018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.908189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.908382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.908412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.908642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.908802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.054 [2024-05-15 04:26:06.908827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.054 qpair failed and we were unable to recover it. 00:25:19.054 [2024-05-15 04:26:06.909007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.055 [2024-05-15 04:26:06.909170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.055 [2024-05-15 04:26:06.909196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.055 qpair failed and we were unable to recover it. 00:25:19.055 [2024-05-15 04:26:06.909392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.055 [2024-05-15 04:26:06.909606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.055 [2024-05-15 04:26:06.909630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.055 qpair failed and we were unable to recover it. 
00:25:19.055 [2024-05-15 04:26:06.909821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.055 [2024-05-15 04:26:06.909987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.055 [2024-05-15 04:26:06.910013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:19.055 qpair failed and we were unable to recover it.
00:25:19.055 [2024-05-15 04:26:06.910192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.055 [2024-05-15 04:26:06.910380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.055 [2024-05-15 04:26:06.910404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:19.055 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x1b70420 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for each further connection attempt logged between 04:26:06.910 and 04:26:06.970 ...]
00:25:19.060 [2024-05-15 04:26:06.970589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.060 [2024-05-15 04:26:06.970774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.060 [2024-05-15 04:26:06.970799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420
00:25:19.060 qpair failed and we were unable to recover it.
00:25:19.060 [2024-05-15 04:26:06.971001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.971373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.971735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.971991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.972182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.972350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.972375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.972566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.972758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.972782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.972969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.973142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.973172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.973334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.973506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.973533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 
00:25:19.060 [2024-05-15 04:26:06.973735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.974097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.974125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.974292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.974784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.974811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.974987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.975177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.975202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.975401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:19.060 [2024-05-15 04:26:06.975558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:25:19.060 [2024-05-15 04:26:06.975585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.060 [2024-05-15 04:26:06.975780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.060 [2024-05-15 04:26:06.975974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.976000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.060 [2024-05-15 04:26:06.976165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.976363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.976388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 
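Note on the repeated failures above: errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while nvmf_target_disconnect_tc2 has the target side down, so the host keeps retrying and logging the same posix_sock_create / nvme_tcp_qpair_connect_sock pair until the listener comes back. A minimal Python sketch (not part of the test run) that reproduces the same errno against the address and port taken from the log lines above:

import errno, socket

# errno 111 on Linux is ECONNREFUSED: the peer actively refused the
# connection because no listener is bound to the port yet.
assert errno.ECONNREFUSED == 111

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("10.0.0.2", 4420))   # address/port as seen in the log entries above
except ConnectionRefusedError as exc:
    print(exc.errno)                # -> 111 when nothing is listening on the port
finally:
    s.close()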
00:25:19.060 [2024-05-15 04:26:06.976554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.976721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.976746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.976953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.977156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.977181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.977375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.977592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.977618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.977814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.977990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.978016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.060 [2024-05-15 04:26:06.978181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.978343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.060 [2024-05-15 04:26:06.978370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.060 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.978540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.978740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.978765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.978945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 
00:25:19.061 [2024-05-15 04:26:06.979338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.979724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.979965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.980141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.980334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.980359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.980541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.980763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.980788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.980945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.981338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.981754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.981980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 
00:25:19.061 [2024-05-15 04:26:06.982169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.982330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.982354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.982544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.982763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.982789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.982958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.983352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.983770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.983979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.984148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.984314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.984339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.984530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.984699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.984723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 
00:25:19.061 [2024-05-15 04:26:06.984877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.985312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.985704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.985920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.986126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.986332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.986357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.986539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.986738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.986764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.986978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.987337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 
00:25:19.061 [2024-05-15 04:26:06.987729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.987921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.988169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.988345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.988370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.988540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.988734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.988759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.988927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.989358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.989752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.989986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 00:25:19.061 [2024-05-15 04:26:06.990154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.990341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.061 [2024-05-15 04:26:06.990366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.061 qpair failed and we were unable to recover it. 
00:25:19.062 [2024-05-15 04:26:06.990561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.990722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.990746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.990903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.991391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.991783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.991989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.992156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.992391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.992416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.992612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.992784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.992809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.992983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.993175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.993210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 
00:25:19.062 [2024-05-15 04:26:06.993414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.993612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.993637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.993845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.994248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.994639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.994857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.995041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.995436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.995803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.995997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 
00:25:19.062 [2024-05-15 04:26:06.996155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.996317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.996342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.996542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.996749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.996774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.996971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.997130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.997154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.062 [2024-05-15 04:26:06.997329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.062 [2024-05-15 04:26:06.997551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.997577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.062 [2024-05-15 04:26:06.997746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 04:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.062 [2024-05-15 04:26:06.997913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.997944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.998141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.998316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.998340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 
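The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call traced above asks the target to create a 64 MiB RAM-backed bdev named Malloc0 with a 512-byte block size via SPDK's JSON-RPC interface. Below is a minimal Python sketch of the equivalent raw request; it assumes the default /var/tmp/spdk.sock RPC socket and the usual name/block_size/num_blocks parameters, and is illustrative only, not part of the test scripts:

import json, socket

# Build the JSON-RPC request corresponding to:
#   rpc_cmd bdev_malloc_create 64 512 -b Malloc0
# 64 MiB expressed as a count of 512-byte blocks.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_malloc_create",
    "params": {
        "name": "Malloc0",
        "block_size": 512,
        "num_blocks": 64 * 1024 * 1024 // 512,
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")          # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(4096).decode())             # response echoes the created bdev name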
00:25:19.062 [2024-05-15 04:26:06.998528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.998696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.998720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.998904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.999297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.062 qpair failed and we were unable to recover it. 00:25:19.062 [2024-05-15 04:26:06.999699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.062 [2024-05-15 04:26:06.999921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.000136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.000299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.000323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.000515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.000709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.000734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.000958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 
00:25:19.063 [2024-05-15 04:26:07.001308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.001650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.001872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.002080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.002254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.002278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.002473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.002696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.002721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.002885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.003095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.003120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.003320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.003477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.003502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.003672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.003966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.004000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 
00:25:19.063 [2024-05-15 04:26:07.004173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.004340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.004364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.004570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.004768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.004794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.004985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.005362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.005724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.005953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.006110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.006317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.006342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.006515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.006714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.006739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 
00:25:19.063 [2024-05-15 04:26:07.006934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.007342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.007764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.007959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.008135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.008305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.008329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.008494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.008713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.008738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.008960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.009129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.009158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.009362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.009524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.009549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 
00:25:19.063 [2024-05-15 04:26:07.009875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.010312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.010743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.010948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.011150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.011355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.011381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.011546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.011766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.011792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.063 qpair failed and we were unable to recover it. 00:25:19.063 [2024-05-15 04:26:07.011986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.063 [2024-05-15 04:26:07.012178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.012203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.012427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.012593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.012618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 
00:25:19.064 [2024-05-15 04:26:07.012810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.013252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.013716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.013920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.014099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.014287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.014312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.014473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.014668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.014694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.014897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.015370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 
00:25:19.064 [2024-05-15 04:26:07.015778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.015968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.016137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.016343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.016370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.016536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.016716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.016741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.016944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.017150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.017175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.017372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.017545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.017571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.017779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.017985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.018011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.018208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.018402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.018427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 
00:25:19.064 [2024-05-15 04:26:07.018604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.018827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.018853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.019018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.019335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.019362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.019561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.019778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.019803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.020001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.020387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.020763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.020961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.021143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.021352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.021377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 
00:25:19.064 [2024-05-15 04:26:07.021543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.021743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.021769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.021970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.022167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.022192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.022355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.022526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.022553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.022771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 Malloc0 00:25:19.064 [2024-05-15 04:26:07.022980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.023006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 [2024-05-15 04:26:07.023208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.064 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:19.064 [2024-05-15 04:26:07.023377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.023402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 00:25:19.064 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.064 [2024-05-15 04:26:07.023609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.064 [2024-05-15 04:26:07.023771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.064 [2024-05-15 04:26:07.023796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.064 qpair failed and we were unable to recover it. 
00:25:19.064 [2024-05-15 04:26:07.023998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.024232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.024257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.024425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.024621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.024646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.024845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.025244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.025642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.025844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.026011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.026207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.026232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.026428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.026465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.065 [2024-05-15 04:26:07.026605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.026630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 
00:25:19.065 [2024-05-15 04:26:07.026817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.027210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.027643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.027861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.028066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.028242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.028266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.028460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.028651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.028676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.028921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.029354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 
00:25:19.065 [2024-05-15 04:26:07.029791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.029984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.030157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.030352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.030376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.030591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.030791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.030816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.030991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.031374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.031749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.031984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.032151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.032347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.032371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 
00:25:19.065 [2024-05-15 04:26:07.032532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.032721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.032745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.032918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.033324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.033758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.033987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.034152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.034325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.034349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.034549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.065 [2024-05-15 04:26:07.034745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.034770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 
00:25:19.065 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.065 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.065 [2024-05-15 04:26:07.034986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.065 [2024-05-15 04:26:07.035178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.035203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.065 qpair failed and we were unable to recover it. 00:25:19.065 [2024-05-15 04:26:07.035394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.035569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.065 [2024-05-15 04:26:07.035594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.035799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.035991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.036016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.036178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.036335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.036359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.036518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.036708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.036734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.036928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.037134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.037159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 
00:25:19.066 [2024-05-15 04:26:07.037328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.037525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.037551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.037797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.037990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.038017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.038195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.038358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.038389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.038588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.038764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.038789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.038964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.039164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.039187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.039409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.039582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.039608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.039819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 
00:25:19.066 [2024-05-15 04:26:07.040241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.040668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.040890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.041085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.041278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.066 [2024-05-15 04:26:07.041310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.066 qpair failed and we were unable to recover it. 00:25:19.066 [2024-05-15 04:26:07.041491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.041650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.041677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.041877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.042070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.042096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.042274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.042501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.042535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 
00:25:19.326 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.326 [2024-05-15 04:26:07.042751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.326 [2024-05-15 04:26:07.042936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.042971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.326 [2024-05-15 04:26:07.043184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.043368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.043402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.043613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.043822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.043849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.044051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.044252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.044278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.044451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.044672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.044698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.044890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 
00:25:19.326 [2024-05-15 04:26:07.045285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.045625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.045813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.046045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.046271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.046296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.046488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.046680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.046704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.046895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.047268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.047682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.047872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 
00:25:19.326 [2024-05-15 04:26:07.048062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.048261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.048286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.326 qpair failed and we were unable to recover it. 00:25:19.326 [2024-05-15 04:26:07.048482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.326 [2024-05-15 04:26:07.048703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.048727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.048896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.049322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.049714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.049925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.050114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.050338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.050362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.050529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.050692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.050718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 
00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.327 [2024-05-15 04:26:07.050941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.327 [2024-05-15 04:26:07.051113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.051139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.327 [2024-05-15 04:26:07.051312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.051500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.051524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.051689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.051857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.051881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.052084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.052255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.052280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.052475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.052674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.052702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.052879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 
00:25:19.327 [2024-05-15 04:26:07.053267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.053674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.053896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.054071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.054231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.054255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.054447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.054462] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:19.327 [2024-05-15 04:26:07.054608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.327 [2024-05-15 04:26:07.054632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70420 with addr=10.0.0.2, port=4420 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.054746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.327 [2024-05-15 04:26:07.057283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.327 [2024-05-15 04:26:07.057493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.327 [2024-05-15 04:26:07.057521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.327 [2024-05-15 04:26:07.057537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.327 [2024-05-15 04:26:07.057549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.327 [2024-05-15 04:26:07.057583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.327 qpair failed and we were unable to recover it. 
00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.327 04:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 3492014 00:25:19.327 [2024-05-15 04:26:07.067149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.327 [2024-05-15 04:26:07.067328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.327 [2024-05-15 04:26:07.067356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.327 [2024-05-15 04:26:07.067371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.327 [2024-05-15 04:26:07.067384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.327 [2024-05-15 04:26:07.067412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.077132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.327 [2024-05-15 04:26:07.077299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.327 [2024-05-15 04:26:07.077327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.327 [2024-05-15 04:26:07.077343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.327 [2024-05-15 04:26:07.077355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.327 [2024-05-15 04:26:07.077384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.327 qpair failed and we were unable to recover it. 
00:25:19.327 [2024-05-15 04:26:07.087107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.327 [2024-05-15 04:26:07.087282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.327 [2024-05-15 04:26:07.087312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.327 [2024-05-15 04:26:07.087327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.327 [2024-05-15 04:26:07.087339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.327 [2024-05-15 04:26:07.087368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.327 qpair failed and we were unable to recover it. 00:25:19.327 [2024-05-15 04:26:07.097132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.327 [2024-05-15 04:26:07.097315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.327 [2024-05-15 04:26:07.097342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.327 [2024-05-15 04:26:07.097357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.327 [2024-05-15 04:26:07.097369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.327 [2024-05-15 04:26:07.097397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.107176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.107346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.107373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.107392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.107405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.107433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 
00:25:19.328 [2024-05-15 04:26:07.117169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.117331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.117356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.117370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.117382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.117409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.127211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.127385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.127411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.127426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.127437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.127465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.137233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.137405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.137430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.137444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.137456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.137485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 
00:25:19.328 [2024-05-15 04:26:07.147260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.147427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.147452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.147467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.147479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.147506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.157316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.157489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.157516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.157534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.157546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.157575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.167321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.167495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.167521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.167535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.167547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.167575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 
00:25:19.328 [2024-05-15 04:26:07.177365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.177539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.177564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.177579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.177590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.177618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.187392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.187561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.187587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.187601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.187613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.187640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.197396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.197559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.197585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.197606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.197619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.197647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 
00:25:19.328 [2024-05-15 04:26:07.207390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.207559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.207595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.207609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.207621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.207648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.217449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.217622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.217649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.217663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.217677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.217706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 00:25:19.328 [2024-05-15 04:26:07.227626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.227806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.227832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.227846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.328 [2024-05-15 04:26:07.227858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.328 [2024-05-15 04:26:07.227886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.328 qpair failed and we were unable to recover it. 
00:25:19.328 [2024-05-15 04:26:07.237557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.328 [2024-05-15 04:26:07.237768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.328 [2024-05-15 04:26:07.237794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.328 [2024-05-15 04:26:07.237809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.237820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.237847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.247565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.247747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.247772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.247786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.247799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.247826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.257601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.257762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.257789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.257804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.257816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.257844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 
00:25:19.329 [2024-05-15 04:26:07.267620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.267816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.267841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.267855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.267867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.267894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.277614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.277780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.277805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.277820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.277831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.277858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.287651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.287834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.287861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.287886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.287900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.287937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 
00:25:19.329 [2024-05-15 04:26:07.297699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.297867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.297893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.297907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.297919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.297954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.307754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.307927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.307959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.307974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.307986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.308014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.317768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.317940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.317966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.317980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.317992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.318020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 
00:25:19.329 [2024-05-15 04:26:07.327766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.327940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.327966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.327980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.327992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.328020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.329 [2024-05-15 04:26:07.337838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.329 [2024-05-15 04:26:07.338019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.329 [2024-05-15 04:26:07.338047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.329 [2024-05-15 04:26:07.338062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.329 [2024-05-15 04:26:07.338074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.329 [2024-05-15 04:26:07.338103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.329 qpair failed and we were unable to recover it. 00:25:19.589 [2024-05-15 04:26:07.347894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.589 [2024-05-15 04:26:07.348071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.589 [2024-05-15 04:26:07.348098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.589 [2024-05-15 04:26:07.348113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.589 [2024-05-15 04:26:07.348125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.589 [2024-05-15 04:26:07.348154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.589 qpair failed and we were unable to recover it. 
00:25:19.589 [2024-05-15 04:26:07.357884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.589 [2024-05-15 04:26:07.358079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.589 [2024-05-15 04:26:07.358105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.589 [2024-05-15 04:26:07.358119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.589 [2024-05-15 04:26:07.358131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.589 [2024-05-15 04:26:07.358159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.589 qpair failed and we were unable to recover it. 00:25:19.589 [2024-05-15 04:26:07.367871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.589 [2024-05-15 04:26:07.368053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.589 [2024-05-15 04:26:07.368079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.589 [2024-05-15 04:26:07.368093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.589 [2024-05-15 04:26:07.368105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.589 [2024-05-15 04:26:07.368133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.589 qpair failed and we were unable to recover it. 00:25:19.589 [2024-05-15 04:26:07.377919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.589 [2024-05-15 04:26:07.378101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.589 [2024-05-15 04:26:07.378132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.589 [2024-05-15 04:26:07.378148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.589 [2024-05-15 04:26:07.378160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.589 [2024-05-15 04:26:07.378188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.589 qpair failed and we were unable to recover it. 
00:25:19.589 [2024-05-15 04:26:07.388010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.589 [2024-05-15 04:26:07.388220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.589 [2024-05-15 04:26:07.388245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.589 [2024-05-15 04:26:07.388260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.589 [2024-05-15 04:26:07.388272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.589 [2024-05-15 04:26:07.388299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.589 qpair failed and we were unable to recover it. 00:25:19.589 [2024-05-15 04:26:07.398003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.398169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.398194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.398208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.398220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.398247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.408012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.408189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.408214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.408228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.408240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.408268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 
00:25:19.590 [2024-05-15 04:26:07.418049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.418222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.418247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.418262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.418273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.418301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.428071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.428237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.428261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.428276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.428288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.428315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.438103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.438270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.438296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.438310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.438322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.438350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 
00:25:19.590 [2024-05-15 04:26:07.448136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.448306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.448331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.448346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.448358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.448385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.458168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.458335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.458360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.458375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.458386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.458415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.468164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.468336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.468370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.468385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.468398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.468426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 
00:25:19.590 [2024-05-15 04:26:07.478278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.478439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.478464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.478478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.478491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.478518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.488266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.488445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.488471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.488485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.488497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.488525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.498252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.498415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.498440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.498454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.498466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.498493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 
00:25:19.590 [2024-05-15 04:26:07.508276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.508443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.508468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.508483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.508495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.508527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.518330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.518498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.518524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.518538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.518550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.518578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 00:25:19.590 [2024-05-15 04:26:07.528361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.590 [2024-05-15 04:26:07.528543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.590 [2024-05-15 04:26:07.528569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.590 [2024-05-15 04:26:07.528583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.590 [2024-05-15 04:26:07.528595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.590 [2024-05-15 04:26:07.528623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.590 qpair failed and we were unable to recover it. 
00:25:19.590 [2024-05-15 04:26:07.538381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.538547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.538572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.538586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.538598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.538626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 00:25:19.591 [2024-05-15 04:26:07.548398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.548558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.548583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.548598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.548610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.548637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 00:25:19.591 [2024-05-15 04:26:07.558413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.558579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.558609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.558625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.558637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.558664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 
00:25:19.591 [2024-05-15 04:26:07.568530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.568707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.568733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.568747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.568759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.568786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 00:25:19.591 [2024-05-15 04:26:07.578500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.578673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.578698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.578713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.578724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.578752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 00:25:19.591 [2024-05-15 04:26:07.588531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.588707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.588732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.588747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.588759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.588787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 
00:25:19.591 [2024-05-15 04:26:07.598564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.591 [2024-05-15 04:26:07.598737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.591 [2024-05-15 04:26:07.598764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.591 [2024-05-15 04:26:07.598785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.591 [2024-05-15 04:26:07.598807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.591 [2024-05-15 04:26:07.598851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.591 qpair failed and we were unable to recover it. 00:25:19.850 [2024-05-15 04:26:07.608581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.608770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.608798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.608813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.608825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.608854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.618593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.618761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.618787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.618802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.618814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.618842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 
00:25:19.851 [2024-05-15 04:26:07.628663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.628833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.628859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.628874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.628886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.628913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.638656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.638824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.638849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.638864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.638876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.638903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.648694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.648865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.648895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.648910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.648922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.648961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 
00:25:19.851 [2024-05-15 04:26:07.658712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.658878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.658903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.658918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.658940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.658970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.668790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.668968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.668993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.669007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.669019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.669047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.678754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.678977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.679003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.679017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.679029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.679057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 
00:25:19.851 [2024-05-15 04:26:07.688784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.689005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.689030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.689045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.689062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.689092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.698894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.699071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.699096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.699111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.699123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.699150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.708875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.709051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.709076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.709090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.709102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.709130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 
00:25:19.851 [2024-05-15 04:26:07.718972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.719137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.719162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.719176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.719188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.719215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.728926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.729112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.729137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.729151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.729163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.851 [2024-05-15 04:26:07.729191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.851 qpair failed and we were unable to recover it. 00:25:19.851 [2024-05-15 04:26:07.738938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.851 [2024-05-15 04:26:07.739127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.851 [2024-05-15 04:26:07.739151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.851 [2024-05-15 04:26:07.739166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.851 [2024-05-15 04:26:07.739178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.739205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 
00:25:19.852 [2024-05-15 04:26:07.748997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.749202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.749227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.749241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.749253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.749280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.759031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.759208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.759233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.759248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.759260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.759287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.769096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.769270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.769295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.769310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.769322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.769349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 
00:25:19.852 [2024-05-15 04:26:07.779050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.779220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.779246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.779260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.779277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.779306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.789105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.789280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.789305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.789319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.789331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.789359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.799166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.799363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.799388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.799402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.799414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.799441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 
00:25:19.852 [2024-05-15 04:26:07.809147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.809324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.809349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.809366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.809378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.809407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.819210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.819395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.819421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.819438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.819452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.819480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.829188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.829404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.829430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.829444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.829456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.829484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 
00:25:19.852 [2024-05-15 04:26:07.839209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.839376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.839401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.839415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.839427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.839455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.849262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.849433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.849458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.849473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.849485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.849512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 00:25:19.852 [2024-05-15 04:26:07.859258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:19.852 [2024-05-15 04:26:07.859441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:19.852 [2024-05-15 04:26:07.859466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:19.852 [2024-05-15 04:26:07.859480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:19.852 [2024-05-15 04:26:07.859492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:19.852 [2024-05-15 04:26:07.859520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.852 qpair failed and we were unable to recover it. 
00:25:20.112 [2024-05-15 04:26:07.869316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.869485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.869512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.869526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.869544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.869574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.879378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.879545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.879571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.879586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.879598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.879627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.889362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.889571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.889596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.889611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.889623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.889650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 
00:25:20.112 [2024-05-15 04:26:07.899370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.899544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.899569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.899583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.899595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.899622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.909430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.909598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.909623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.909637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.909649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.909677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.919460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.919633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.919658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.919673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.919684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.919712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 
00:25:20.112 [2024-05-15 04:26:07.929479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.929661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.929686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.929701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.929712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.929740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.939520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.939690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.939715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.939730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.939742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.939770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.949531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.949709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.949734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.949748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.949760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.949787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 
00:25:20.112 [2024-05-15 04:26:07.959594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.959815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.959840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.959860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.959873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.959901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.969599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.969770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.969796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.969810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.969822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.969850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.112 [2024-05-15 04:26:07.979611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.979783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.979809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.979823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.979835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.979863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 
00:25:20.112 [2024-05-15 04:26:07.989637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.112 [2024-05-15 04:26:07.989802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.112 [2024-05-15 04:26:07.989827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.112 [2024-05-15 04:26:07.989842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.112 [2024-05-15 04:26:07.989854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.112 [2024-05-15 04:26:07.989881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.112 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:07.999651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:07.999818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:07.999842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:07.999856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:07.999868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:07.999896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.009797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.009981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.010006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.010021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.010032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.010060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 
00:25:20.113 [2024-05-15 04:26:08.019765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.019943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.019968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.019982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.019994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.020022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.029768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.029973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.029998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.030013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.030029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.030059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.039807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.039976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.040001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.040016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.040028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.040056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 
00:25:20.113 [2024-05-15 04:26:08.049821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.050009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.050034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.050055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.050069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.050098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.059888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.060119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.060146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.060160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.060172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.060200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.069899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.070084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.070109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.070124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.070135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.070163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 
00:25:20.113 [2024-05-15 04:26:08.079963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.080131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.080156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.080171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.080182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.080210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.089950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.090115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.090141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.090155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.090166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.090194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.099984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.100177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.100202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.100217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.100229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.100256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 
00:25:20.113 [2024-05-15 04:26:08.110050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.110221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.110246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.110261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.110273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.110300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.113 [2024-05-15 04:26:08.120000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.113 [2024-05-15 04:26:08.120182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.113 [2024-05-15 04:26:08.120207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.113 [2024-05-15 04:26:08.120221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.113 [2024-05-15 04:26:08.120233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.113 [2024-05-15 04:26:08.120261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.113 qpair failed and we were unable to recover it. 00:25:20.372 [2024-05-15 04:26:08.130059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.130260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.130288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.130307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.130319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.130349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 
00:25:20.372 [2024-05-15 04:26:08.140098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.140284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.140317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.140333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.140346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.140374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 00:25:20.372 [2024-05-15 04:26:08.150098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.150270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.150296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.150311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.150323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.150351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 00:25:20.372 [2024-05-15 04:26:08.160111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.160275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.160301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.160315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.160327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.160355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 
00:25:20.372 [2024-05-15 04:26:08.170171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.170351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.170376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.170391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.170403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.170431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 00:25:20.372 [2024-05-15 04:26:08.180200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.372 [2024-05-15 04:26:08.180417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.372 [2024-05-15 04:26:08.180442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.372 [2024-05-15 04:26:08.180457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.372 [2024-05-15 04:26:08.180469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.372 [2024-05-15 04:26:08.180497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.372 qpair failed and we were unable to recover it. 00:25:20.372 [2024-05-15 04:26:08.190281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.190449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.190475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.190489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.190501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.190528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 
00:25:20.373 [2024-05-15 04:26:08.200237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.200402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.200427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.200441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.200453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.200481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.210273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.210441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.210466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.210481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.210493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.210521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.220296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.220466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.220491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.220506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.220518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.220545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 
00:25:20.373 [2024-05-15 04:26:08.230381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.230591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.230625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.230643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.230656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.230685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.240361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.240521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.240547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.240561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.240573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.240601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.250480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.250655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.250679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.250694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.250705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.250733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 
00:25:20.373 [2024-05-15 04:26:08.260400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.260579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.260605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.260620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.260632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.260660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.270546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.270710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.270735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.270749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.270761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.270794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.280544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.280736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.280761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.280776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.280788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.280815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 
00:25:20.373 [2024-05-15 04:26:08.290578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.290779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.290804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.290818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.290830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.290857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.300537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.300745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.300771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.300785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.300797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.300825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.310588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.310757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.310783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.310797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.310809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.310837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 
00:25:20.373 [2024-05-15 04:26:08.320589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.320756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.320786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.320802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.320814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.320841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.373 qpair failed and we were unable to recover it. 00:25:20.373 [2024-05-15 04:26:08.330716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.373 [2024-05-15 04:26:08.330892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.373 [2024-05-15 04:26:08.330917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.373 [2024-05-15 04:26:08.330938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.373 [2024-05-15 04:26:08.330952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.373 [2024-05-15 04:26:08.330980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 00:25:20.374 [2024-05-15 04:26:08.340684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.374 [2024-05-15 04:26:08.340857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.374 [2024-05-15 04:26:08.340882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.374 [2024-05-15 04:26:08.340897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.374 [2024-05-15 04:26:08.340909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.374 [2024-05-15 04:26:08.340942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 
00:25:20.374 [2024-05-15 04:26:08.350757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.374 [2024-05-15 04:26:08.350940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.374 [2024-05-15 04:26:08.350966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.374 [2024-05-15 04:26:08.350980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.374 [2024-05-15 04:26:08.350992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.374 [2024-05-15 04:26:08.351020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 00:25:20.374 [2024-05-15 04:26:08.360686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.374 [2024-05-15 04:26:08.360859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.374 [2024-05-15 04:26:08.360885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.374 [2024-05-15 04:26:08.360900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.374 [2024-05-15 04:26:08.360912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.374 [2024-05-15 04:26:08.360952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 00:25:20.374 [2024-05-15 04:26:08.370738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.374 [2024-05-15 04:26:08.370912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.374 [2024-05-15 04:26:08.370942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.374 [2024-05-15 04:26:08.370957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.374 [2024-05-15 04:26:08.370970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.374 [2024-05-15 04:26:08.370998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 
00:25:20.374 [2024-05-15 04:26:08.380757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.374 [2024-05-15 04:26:08.380940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.374 [2024-05-15 04:26:08.380966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.374 [2024-05-15 04:26:08.380980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.374 [2024-05-15 04:26:08.380992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.374 [2024-05-15 04:26:08.381020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.374 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.390792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.391012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.391040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.391055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.391067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.391095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.400887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.401075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.401102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.401117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.401129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.401158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-05-15 04:26:08.410856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.411072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.411102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.411118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.411130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.411158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.420852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.421027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.421053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.421067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.421079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.421106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.430903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.431077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.431103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.431117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.431129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.431157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-05-15 04:26:08.441012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.441181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.441206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.441220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.441232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.441260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.450966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.451144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.451169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.451183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.451200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.451229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.460990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.461210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.461236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.461251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.461263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.461290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-05-15 04:26:08.471072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.471247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.471272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.471287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.471299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.471329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.481045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.481218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.481244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.481258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.481270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.481299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.491071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.491244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.491270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.491284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.491296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.491324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-05-15 04:26:08.501085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.633 [2024-05-15 04:26:08.501258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.633 [2024-05-15 04:26:08.501283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.633 [2024-05-15 04:26:08.501297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.633 [2024-05-15 04:26:08.501309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.633 [2024-05-15 04:26:08.501336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-05-15 04:26:08.511159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.511328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.511352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.511367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.511379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.511407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.521152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.521317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.521342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.521356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.521368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.521395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-05-15 04:26:08.531257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.531460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.531484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.531499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.531511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.531538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.541318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.541500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.541524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.541538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.541556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.541585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.551268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.551494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.551519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.551533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.551545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.551573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-05-15 04:26:08.561319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.561490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.561516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.561530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.561542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.561571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.571404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.571584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.571609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.571623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.571638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.571667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.581432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.581605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.581632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.581649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.581663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.581691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-05-15 04:26:08.591399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.591615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.591641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.591656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.591668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.591696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.601469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.601634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.601659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.601674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.601687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.601714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.611425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.611593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.611618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.611632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.611644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.611672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-05-15 04:26:08.621451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.621623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.621648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.634 [2024-05-15 04:26:08.621663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.634 [2024-05-15 04:26:08.621675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.634 [2024-05-15 04:26:08.621702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-05-15 04:26:08.631562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.634 [2024-05-15 04:26:08.631727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.634 [2024-05-15 04:26:08.631752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.635 [2024-05-15 04:26:08.631766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.635 [2024-05-15 04:26:08.631783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.635 [2024-05-15 04:26:08.631811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-05-15 04:26:08.641582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.635 [2024-05-15 04:26:08.641765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.635 [2024-05-15 04:26:08.641790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.635 [2024-05-15 04:26:08.641804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.635 [2024-05-15 04:26:08.641816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.635 [2024-05-15 04:26:08.641843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.635 qpair failed and we were unable to recover it. 
00:25:20.894 [2024-05-15 04:26:08.651546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.894 [2024-05-15 04:26:08.651722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.894 [2024-05-15 04:26:08.651749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.894 [2024-05-15 04:26:08.651764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.894 [2024-05-15 04:26:08.651776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.894 [2024-05-15 04:26:08.651805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.894 qpair failed and we were unable to recover it. 00:25:20.894 [2024-05-15 04:26:08.661587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.894 [2024-05-15 04:26:08.661754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.894 [2024-05-15 04:26:08.661781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.894 [2024-05-15 04:26:08.661796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.894 [2024-05-15 04:26:08.661807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.894 [2024-05-15 04:26:08.661835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.894 qpair failed and we were unable to recover it. 00:25:20.894 [2024-05-15 04:26:08.671623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.894 [2024-05-15 04:26:08.671832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.894 [2024-05-15 04:26:08.671857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.894 [2024-05-15 04:26:08.671872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.671884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b70420 00:25:20.895 [2024-05-15 04:26:08.671912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.895 qpair failed and we were unable to recover it. 
00:25:20.895 [2024-05-15 04:26:08.681662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.681851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.681884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.681901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.681913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.681952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.691718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.691905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.691941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.691959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.691971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.692002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.701697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.701870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.701896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.701911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.701923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.701962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 
00:25:20.895 [2024-05-15 04:26:08.711769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.711942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.711978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.711993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.712005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.712035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.721778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.721963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.721989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.722009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.722022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.722052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.731760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.731983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.732009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.732024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.732036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.732068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 
00:25:20.895 [2024-05-15 04:26:08.741777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.741956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.741990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.742005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.742017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.742046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.751860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.752034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.752061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.752076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.752088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.752117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.761862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.762034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.762061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.762075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.762087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.762129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 
00:25:20.895 [2024-05-15 04:26:08.771879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.772072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.772098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.772112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.772125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.772154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.781897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.782088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.782115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.782129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.782140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.782171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.791921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.792092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.792118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.792133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.792146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.792176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 
00:25:20.895 [2024-05-15 04:26:08.801975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.802181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.802208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.895 [2024-05-15 04:26:08.802223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.895 [2024-05-15 04:26:08.802236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.895 [2024-05-15 04:26:08.802266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.895 qpair failed and we were unable to recover it. 00:25:20.895 [2024-05-15 04:26:08.812040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.895 [2024-05-15 04:26:08.812233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.895 [2024-05-15 04:26:08.812263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.812279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.812291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.812333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.822022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.822198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.822226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.822244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.822257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.822287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 
00:25:20.896 [2024-05-15 04:26:08.832033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.832199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.832225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.832239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.832251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.832281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.842098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.842293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.842319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.842333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.842346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.842386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.852092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.852266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.852292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.852307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.852319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.852353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 
00:25:20.896 [2024-05-15 04:26:08.862138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.862305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.862330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.862344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.862356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.862397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.872142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.872311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.872336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.872351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.872363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.872391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.882187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.882373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.882400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.882414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.882426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.882455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 
00:25:20.896 [2024-05-15 04:26:08.892225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.892439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.892465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.892479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.892491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.892520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:20.896 [2024-05-15 04:26:08.902211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:20.896 [2024-05-15 04:26:08.902376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:20.896 [2024-05-15 04:26:08.902407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:20.896 [2024-05-15 04:26:08.902422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:20.896 [2024-05-15 04:26:08.902435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:20.896 [2024-05-15 04:26:08.902463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.896 qpair failed and we were unable to recover it. 00:25:21.156 [2024-05-15 04:26:08.912265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.912432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.912457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.912472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.912484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.912513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 
00:25:21.156 [2024-05-15 04:26:08.922279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.922437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.922463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.922478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.922489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.922530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 00:25:21.156 [2024-05-15 04:26:08.932312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.932480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.932505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.932520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.932532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.932561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 00:25:21.156 [2024-05-15 04:26:08.942325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.942494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.942520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.942534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.942547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.942581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 
00:25:21.156 [2024-05-15 04:26:08.952406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.952598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.952624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.952639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.952654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.952695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 00:25:21.156 [2024-05-15 04:26:08.962386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.962564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.962590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.962605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.962617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.962646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 00:25:21.156 [2024-05-15 04:26:08.972425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.972599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.972624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.972638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.972651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.972679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.156 qpair failed and we were unable to recover it. 
00:25:21.156 [2024-05-15 04:26:08.982452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.156 [2024-05-15 04:26:08.982621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.156 [2024-05-15 04:26:08.982646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.156 [2024-05-15 04:26:08.982661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.156 [2024-05-15 04:26:08.982673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.156 [2024-05-15 04:26:08.982701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:08.992462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:08.992628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:08.992658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:08.992674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:08.992686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:08.992715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.002531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.002701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.002726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.002741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.002753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.002782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.157 [2024-05-15 04:26:09.012598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.012792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.012817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.012832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.012844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.012872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.022573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.022793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.022818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.022832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.022845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.022874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.032634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.032806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.032830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.032844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.032862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.032906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.157 [2024-05-15 04:26:09.042640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.042855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.042882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.042896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.042908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.042944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.052689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.052911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.052947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.052964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.052976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.053006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.062712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.062883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.062908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.062923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.062942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.062972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.157 [2024-05-15 04:26:09.072707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.072873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.072898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.072913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.072925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.072963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.082733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.082905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.082936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.082952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.082964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.082993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.092777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.092960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.092986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.093000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.093012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.093041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.157 [2024-05-15 04:26:09.102818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.102998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.103024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.103038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.103050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.103079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.112828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.112998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.113022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.113037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.113049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.113078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.122842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.123022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.123047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.123067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.123080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.123109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.157 [2024-05-15 04:26:09.132904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.133118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.133144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.133158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.133170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.133199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.142948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.143139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.143164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.143179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.143191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.143220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 00:25:21.157 [2024-05-15 04:26:09.152991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.157 [2024-05-15 04:26:09.153159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.157 [2024-05-15 04:26:09.153187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.157 [2024-05-15 04:26:09.153203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.157 [2024-05-15 04:26:09.153215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.157 [2024-05-15 04:26:09.153257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.157 qpair failed and we were unable to recover it. 
00:25:21.158 [2024-05-15 04:26:09.162970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.158 [2024-05-15 04:26:09.163150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.158 [2024-05-15 04:26:09.163176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.158 [2024-05-15 04:26:09.163190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.158 [2024-05-15 04:26:09.163202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.158 [2024-05-15 04:26:09.163231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.158 qpair failed and we were unable to recover it. 00:25:21.416 [2024-05-15 04:26:09.173069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.416 [2024-05-15 04:26:09.173247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.416 [2024-05-15 04:26:09.173274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.416 [2024-05-15 04:26:09.173292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.416 [2024-05-15 04:26:09.173304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.173334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.183072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.183262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.183289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.183308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.183321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.183352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 
00:25:21.417 [2024-05-15 04:26:09.193061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.193226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.193252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.193266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.193278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.193308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.203130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.203293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.203319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.203333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.203345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.203373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.213200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.213374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.213399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.213421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.213434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.213463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 
00:25:21.417 [2024-05-15 04:26:09.223157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.223349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.223374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.223389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.223401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.223430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.233172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.233339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.233365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.233379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.233391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.233420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.243309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.243514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.243540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.243555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.243566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.243595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 
00:25:21.417 [2024-05-15 04:26:09.253355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.253556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.253581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.253595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.253608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.253648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.263362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.263532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.263558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.263573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.263585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.263625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.273337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.273559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.273585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.273599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.273611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.273640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 
00:25:21.417 [2024-05-15 04:26:09.283332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.283546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.283572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.283586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.283598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.283627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.293379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.293555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.293579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.293594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.293606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.293635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 00:25:21.417 [2024-05-15 04:26:09.303459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.303649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.303680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.303696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.417 [2024-05-15 04:26:09.303708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.417 [2024-05-15 04:26:09.303748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.417 qpair failed and we were unable to recover it. 
00:25:21.417 [2024-05-15 04:26:09.313488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.417 [2024-05-15 04:26:09.313677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.417 [2024-05-15 04:26:09.313703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.417 [2024-05-15 04:26:09.313718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.313730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.313759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.323434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.323611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.323636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.323650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.323662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.323691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.333599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.333790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.333814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.333829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.333841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.333870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 
00:25:21.418 [2024-05-15 04:26:09.343497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.343666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.343691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.343705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.343717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.343751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.353527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.353691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.353717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.353731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.353743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.353772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.363555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.363719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.363745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.363760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.363772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.363801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 
00:25:21.418 [2024-05-15 04:26:09.373684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.373858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.373884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.373898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.373910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.373946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.383731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.383913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.383946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.383962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.383974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.384004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.393737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.393903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.393941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.393958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.393970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.393999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 
00:25:21.418 [2024-05-15 04:26:09.403664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.403901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.403926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.403951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.403964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.403993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.413703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.413921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.413953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.413968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.413981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.414010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 00:25:21.418 [2024-05-15 04:26:09.423736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.418 [2024-05-15 04:26:09.423904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.418 [2024-05-15 04:26:09.423939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.418 [2024-05-15 04:26:09.423956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.418 [2024-05-15 04:26:09.423968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.418 [2024-05-15 04:26:09.423998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.418 qpair failed and we were unable to recover it. 
00:25:21.677 [2024-05-15 04:26:09.433777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.677 [2024-05-15 04:26:09.433982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.677 [2024-05-15 04:26:09.434007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.677 [2024-05-15 04:26:09.434021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.677 [2024-05-15 04:26:09.434039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.677 [2024-05-15 04:26:09.434069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.677 qpair failed and we were unable to recover it. 00:25:21.677 [2024-05-15 04:26:09.443756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.677 [2024-05-15 04:26:09.443949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.677 [2024-05-15 04:26:09.443975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.677 [2024-05-15 04:26:09.443989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.677 [2024-05-15 04:26:09.444001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.677 [2024-05-15 04:26:09.444029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.677 qpair failed and we were unable to recover it. 00:25:21.677 [2024-05-15 04:26:09.453887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.677 [2024-05-15 04:26:09.454070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.677 [2024-05-15 04:26:09.454096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.677 [2024-05-15 04:26:09.454111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.677 [2024-05-15 04:26:09.454123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.677 [2024-05-15 04:26:09.454152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.677 qpair failed and we were unable to recover it. 
00:25:21.677 [2024-05-15 04:26:09.463842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.677 [2024-05-15 04:26:09.464032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.677 [2024-05-15 04:26:09.464060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.677 [2024-05-15 04:26:09.464078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.677 [2024-05-15 04:26:09.464090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.677 [2024-05-15 04:26:09.464121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.677 qpair failed and we were unable to recover it. 00:25:21.677 [2024-05-15 04:26:09.473948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.677 [2024-05-15 04:26:09.474117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.677 [2024-05-15 04:26:09.474143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.677 [2024-05-15 04:26:09.474158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.677 [2024-05-15 04:26:09.474170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.677 [2024-05-15 04:26:09.474199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.677 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.483913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.484114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.484141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.484155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.484167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.484197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 
00:25:21.678 [2024-05-15 04:26:09.493927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.494142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.494167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.494181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.494193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.494222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.503947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.504125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.504150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.504164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.504176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.504205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.513973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.514150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.514175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.514190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.514202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.514231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 
00:25:21.678 [2024-05-15 04:26:09.523990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.524155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.524179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.524199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.524212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.524242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.534062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.534234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.534260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.534274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.534286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.534315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.544106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.544315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.544341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.544355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.544367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.544396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 
00:25:21.678 [2024-05-15 04:26:09.554107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.554276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.554301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.554315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.554327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.554356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.564200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.564368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.564394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.564408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.564420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.564449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.574151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.574334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.574359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.574374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.574386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.574415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 
00:25:21.678 [2024-05-15 04:26:09.584176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.584352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.584378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.584393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.584405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.584445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.594172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.594341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.594366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.594380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.594392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.594421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 00:25:21.678 [2024-05-15 04:26:09.604224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.604398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.604423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.604437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.604452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.604482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.678 qpair failed and we were unable to recover it. 
00:25:21.678 [2024-05-15 04:26:09.614251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.678 [2024-05-15 04:26:09.614426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.678 [2024-05-15 04:26:09.614451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.678 [2024-05-15 04:26:09.614472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.678 [2024-05-15 04:26:09.614485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.678 [2024-05-15 04:26:09.614513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.679 [2024-05-15 04:26:09.624314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.624490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.624515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.624529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.624541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.624570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.679 [2024-05-15 04:26:09.634338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.634509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.634534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.634549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.634561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.634591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 
00:25:21.679 [2024-05-15 04:26:09.644350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.644536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.644561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.644576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.644588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.644628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.679 [2024-05-15 04:26:09.654341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.654526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.654552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.654566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.654579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.654607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.679 [2024-05-15 04:26:09.664364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.664536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.664562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.664576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.664589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.664618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 
00:25:21.679 [2024-05-15 04:26:09.674389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.674579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.674604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.674618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.674630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.674659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.679 [2024-05-15 04:26:09.684421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.679 [2024-05-15 04:26:09.684590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.679 [2024-05-15 04:26:09.684616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.679 [2024-05-15 04:26:09.684630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.679 [2024-05-15 04:26:09.684642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.679 [2024-05-15 04:26:09.684671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.679 qpair failed and we were unable to recover it. 00:25:21.941 [2024-05-15 04:26:09.694553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.941 [2024-05-15 04:26:09.694720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.941 [2024-05-15 04:26:09.694745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.941 [2024-05-15 04:26:09.694759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.941 [2024-05-15 04:26:09.694772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.941 [2024-05-15 04:26:09.694800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.941 qpair failed and we were unable to recover it. 
00:25:21.941 [2024-05-15 04:26:09.704504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.941 [2024-05-15 04:26:09.704679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.941 [2024-05-15 04:26:09.704709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.941 [2024-05-15 04:26:09.704725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.941 [2024-05-15 04:26:09.704736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.941 [2024-05-15 04:26:09.704765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.941 qpair failed and we were unable to recover it. 00:25:21.941 [2024-05-15 04:26:09.714555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.941 [2024-05-15 04:26:09.714732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.941 [2024-05-15 04:26:09.714758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.941 [2024-05-15 04:26:09.714772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.714784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.714813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.724531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.724693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.724718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.724733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.724745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.724773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 
00:25:21.942 [2024-05-15 04:26:09.734596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.734764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.734789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.734803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.734815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.734856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.744658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.744869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.744895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.744909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.744921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.744975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.754618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.754833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.754858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.754872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.754884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.754924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 
00:25:21.942 [2024-05-15 04:26:09.764649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.764821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.764847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.764862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.764874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.764903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.774683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.774856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.774881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.774895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.774908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.774957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.784696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.784861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.784886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.784900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.784912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.784949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 
00:25:21.942 [2024-05-15 04:26:09.794738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.794923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.794969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.794987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.794998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.795028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.804780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.804967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.804993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.805007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.805019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.805049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.814798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.814969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.814994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.815008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.815020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.815050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 
00:25:21.942 [2024-05-15 04:26:09.824828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.825011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.825037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.825052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.825064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.825094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.834882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.835060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.835085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.835100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.835118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.835147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 00:25:21.942 [2024-05-15 04:26:09.844891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.845107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.845133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.942 [2024-05-15 04:26:09.845148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.942 [2024-05-15 04:26:09.845160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.942 [2024-05-15 04:26:09.845190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.942 qpair failed and we were unable to recover it. 
00:25:21.942 [2024-05-15 04:26:09.854936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.942 [2024-05-15 04:26:09.855109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.942 [2024-05-15 04:26:09.855134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.855149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.855161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.855190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.864957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.865168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.865193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.865207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.865220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.865248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.874995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.875162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.875187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.875202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.875214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.875243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 
00:25:21.943 [2024-05-15 04:26:09.885062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.885234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.885259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.885274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.885286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.885316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.895064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.895236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.895261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.895275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.895287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.895317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.905178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.905353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.905378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.905392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.905404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.905433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 
00:25:21.943 [2024-05-15 04:26:09.915079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.915253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.915278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.915292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.915304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.915333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.925113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.925301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.925326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.925340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.925358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.925387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:21.943 [2024-05-15 04:26:09.935176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.935355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.935380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.935395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.935407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.935435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 
00:25:21.943 [2024-05-15 04:26:09.945272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:21.943 [2024-05-15 04:26:09.945463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:21.943 [2024-05-15 04:26:09.945487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:21.943 [2024-05-15 04:26:09.945502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:21.943 [2024-05-15 04:26:09.945514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:21.943 [2024-05-15 04:26:09.945542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:21.943 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:09.955216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:09.955391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:09.955416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:09.955430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:09.955442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:22.203 [2024-05-15 04:26:09.955471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:09.965309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:09.965477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:09.965503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:09.965516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:09.965529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a54000b90 00:25:22.203 [2024-05-15 04:26:09.965558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.203 qpair failed and we were unable to recover it. 
00:25:22.203 [2024-05-15 04:26:09.975313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:09.975496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:09.975528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:09.975544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:09.975557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:09.975588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:09.985383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:09.985563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:09.985591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:09.985609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:09.985622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:09.985652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:09.995360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:09.995531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:09.995558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:09.995573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:09.995586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:09.995615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 
00:25:22.203 [2024-05-15 04:26:10.005547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.005733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.005763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.005779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.005791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.005823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:10.015401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.015571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.015599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.015620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.015633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.015664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:10.025416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.025598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.025625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.025640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.025653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.025683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 
00:25:22.203 [2024-05-15 04:26:10.035445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.035655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.035681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.035696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.035708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.035739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:10.045569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.045743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.045769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.045784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.045796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.045826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.203 qpair failed and we were unable to recover it. 00:25:22.203 [2024-05-15 04:26:10.055510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.203 [2024-05-15 04:26:10.055682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.203 [2024-05-15 04:26:10.055709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.203 [2024-05-15 04:26:10.055723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.203 [2024-05-15 04:26:10.055736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.203 [2024-05-15 04:26:10.055766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 
00:25:22.204 [2024-05-15 04:26:10.065641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.065838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.065864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.065879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.065891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.065920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.075571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.075753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.075780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.075795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.075810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.075840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.085591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.085762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.085790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.085804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.085817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.085846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 
00:25:22.204 [2024-05-15 04:26:10.095597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.095765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.095791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.095806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.095819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.095849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.105623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.105793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.105832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.105848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.105860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.105890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.115655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.115822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.115847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.115862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.115875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.115904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 
00:25:22.204 [2024-05-15 04:26:10.125664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.125829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.125855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.125870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.125883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.125912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.135699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.135883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.135918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.135940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.135954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.135984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.145731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.145897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.145923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.145947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.145960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.145995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 
00:25:22.204 [2024-05-15 04:26:10.155832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.156010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.156036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.156054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.156067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.156097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.165797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.165993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.204 [2024-05-15 04:26:10.166020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.204 [2024-05-15 04:26:10.166035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.204 [2024-05-15 04:26:10.166046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.204 [2024-05-15 04:26:10.166076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.204 qpair failed and we were unable to recover it. 00:25:22.204 [2024-05-15 04:26:10.175867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.204 [2024-05-15 04:26:10.176092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.205 [2024-05-15 04:26:10.176120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.205 [2024-05-15 04:26:10.176136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.205 [2024-05-15 04:26:10.176148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.205 [2024-05-15 04:26:10.176178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.205 qpair failed and we were unable to recover it. 
00:25:22.205 [2024-05-15 04:26:10.185869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.205 [2024-05-15 04:26:10.186085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.205 [2024-05-15 04:26:10.186112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.205 [2024-05-15 04:26:10.186126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.205 [2024-05-15 04:26:10.186139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.205 [2024-05-15 04:26:10.186169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.205 qpair failed and we were unable to recover it. 00:25:22.205 [2024-05-15 04:26:10.195873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.205 [2024-05-15 04:26:10.196051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.205 [2024-05-15 04:26:10.196083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.205 [2024-05-15 04:26:10.196099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.205 [2024-05-15 04:26:10.196111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.205 [2024-05-15 04:26:10.196141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.205 qpair failed and we were unable to recover it. 00:25:22.205 [2024-05-15 04:26:10.205906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.205 [2024-05-15 04:26:10.206092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.205 [2024-05-15 04:26:10.206118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.205 [2024-05-15 04:26:10.206133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.205 [2024-05-15 04:26:10.206145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.205 [2024-05-15 04:26:10.206174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.205 qpair failed and we were unable to recover it. 
00:25:22.205 [2024-05-15 04:26:10.215950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.205 [2024-05-15 04:26:10.216168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.205 [2024-05-15 04:26:10.216194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.205 [2024-05-15 04:26:10.216208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.205 [2024-05-15 04:26:10.216220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.205 [2024-05-15 04:26:10.216250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.205 qpair failed and we were unable to recover it. 00:25:22.464 [2024-05-15 04:26:10.225981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.464 [2024-05-15 04:26:10.226178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.464 [2024-05-15 04:26:10.226204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.464 [2024-05-15 04:26:10.226219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.464 [2024-05-15 04:26:10.226231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.464 [2024-05-15 04:26:10.226261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.464 qpair failed and we were unable to recover it. 00:25:22.464 [2024-05-15 04:26:10.236027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.236215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.236241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.236255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.236267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.236302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 
00:25:22.465 [2024-05-15 04:26:10.246014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.246191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.246217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.246232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.246244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.246273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.256098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.256329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.256355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.256369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.256380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.256409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.266088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.266301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.266326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.266340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.266352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.266381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 
00:25:22.465 [2024-05-15 04:26:10.276098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.276278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.276303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.276318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.276330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.276359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.286138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.286313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.286339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.286354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.286366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.286395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.296193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.296368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.296395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.296410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.296423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.296453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 
00:25:22.465 [2024-05-15 04:26:10.306198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.306366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.306392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.306407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.306419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.306447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.316316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.316482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.316508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.316523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.316535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.316564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.326232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.326397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.326424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.326438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.326456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.326486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 
00:25:22.465 [2024-05-15 04:26:10.336356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.336588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.336625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.336639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.336651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.336693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.346332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.346568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.346594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.346609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.346621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.346662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.356353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.356531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.356557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.356571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.356583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.356613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 
00:25:22.465 [2024-05-15 04:26:10.366402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.465 [2024-05-15 04:26:10.366601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.465 [2024-05-15 04:26:10.366627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.465 [2024-05-15 04:26:10.366642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.465 [2024-05-15 04:26:10.366654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.465 [2024-05-15 04:26:10.366683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 qpair failed and we were unable to recover it. 00:25:22.465 [2024-05-15 04:26:10.376408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.376620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.376646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.376660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.376672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.376701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.386449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.386655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.386683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.386699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.386711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.386741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 
00:25:22.466 [2024-05-15 04:26:10.396491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.396697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.396733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.396751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.396764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.396795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.406544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.406711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.406747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.406762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.406774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.406817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.416541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.416730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.416757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.416778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.416792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.416822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 
00:25:22.466 [2024-05-15 04:26:10.426566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.426745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.426772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.426786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.426798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.426829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.436565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.436733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.436759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.436774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.436786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.436816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.446555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.446728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.446754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.446769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.446781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.446810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 
00:25:22.466 [2024-05-15 04:26:10.456594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.456769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.456794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.456809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.456821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.456851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.466622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.466791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.466817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.466831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.466843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.466885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 00:25:22.466 [2024-05-15 04:26:10.476704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.466 [2024-05-15 04:26:10.476904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.466 [2024-05-15 04:26:10.476936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.466 [2024-05-15 04:26:10.476953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.466 [2024-05-15 04:26:10.476966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.466 [2024-05-15 04:26:10.476995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.466 qpair failed and we were unable to recover it. 
00:25:22.725 [2024-05-15 04:26:10.486747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.725 [2024-05-15 04:26:10.486916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.725 [2024-05-15 04:26:10.486949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.725 [2024-05-15 04:26:10.486964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.725 [2024-05-15 04:26:10.486976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.725 [2024-05-15 04:26:10.487006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.725 qpair failed and we were unable to recover it. 00:25:22.725 [2024-05-15 04:26:10.496747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.725 [2024-05-15 04:26:10.496944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.725 [2024-05-15 04:26:10.496970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.725 [2024-05-15 04:26:10.496984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.725 [2024-05-15 04:26:10.496996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.725 [2024-05-15 04:26:10.497025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.725 qpair failed and we were unable to recover it. 00:25:22.725 [2024-05-15 04:26:10.506731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.725 [2024-05-15 04:26:10.506893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.506923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.506948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.506961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.506990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 
00:25:22.726 [2024-05-15 04:26:10.516786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.517006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.517033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.517051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.517064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.517093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.526910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.527088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.527114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.527129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.527141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.527170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.536838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.537010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.537036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.537050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.537062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.537091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 
00:25:22.726 [2024-05-15 04:26:10.546859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.547032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.547058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.547072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.547084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.547120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.556912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.557124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.557150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.557164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.557176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.557206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.566972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.567146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.567173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.567192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.567205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.567235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 
00:25:22.726 [2024-05-15 04:26:10.576977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.577176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.577202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.577217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.577228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.577258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.587028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.587232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.587258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.587273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.587284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.587314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.596999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.597174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.597205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.597221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.597233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.597262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 
00:25:22.726 [2024-05-15 04:26:10.607025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.607190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.607216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.607231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.607243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.607272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.617063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.617279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.617305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.617319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.617331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.617360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 00:25:22.726 [2024-05-15 04:26:10.627224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.627426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.627453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.627468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.627480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.627509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.726 qpair failed and we were unable to recover it. 
00:25:22.726 [2024-05-15 04:26:10.637140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.726 [2024-05-15 04:26:10.637344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.726 [2024-05-15 04:26:10.637371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.726 [2024-05-15 04:26:10.637386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.726 [2024-05-15 04:26:10.637403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.726 [2024-05-15 04:26:10.637439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.647174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.647354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.647380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.647395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.647407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.647436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.657190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.657364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.657389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.657404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.657416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.657445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 
00:25:22.727 [2024-05-15 04:26:10.667199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.667377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.667403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.667417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.667430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.667459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.677235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.677398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.677424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.677439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.677451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.677493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.687286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.687503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.687537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.687553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.687565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.687595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 
00:25:22.727 [2024-05-15 04:26:10.697292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.697468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.697494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.697508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.697520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.697549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.707306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.707485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.707509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.707524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.707536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.707565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.717359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.717598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.717624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.717638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.717650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.717679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 
00:25:22.727 [2024-05-15 04:26:10.727411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.727645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.727671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.727686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.727704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.727746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.727 [2024-05-15 04:26:10.737464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.727 [2024-05-15 04:26:10.737645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.727 [2024-05-15 04:26:10.737671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.727 [2024-05-15 04:26:10.737690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.727 [2024-05-15 04:26:10.737702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.727 [2024-05-15 04:26:10.737733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.727 qpair failed and we were unable to recover it. 00:25:22.985 [2024-05-15 04:26:10.747465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.985 [2024-05-15 04:26:10.747673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.985 [2024-05-15 04:26:10.747701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.985 [2024-05-15 04:26:10.747716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.985 [2024-05-15 04:26:10.747728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.985 [2024-05-15 04:26:10.747758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.985 qpair failed and we were unable to recover it. 
00:25:22.985 [2024-05-15 04:26:10.757509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.757687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.757714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.757728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.757744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.757773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.767530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.767700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.767726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.767741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.767753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.767782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.777550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.777770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.777796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.777810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.777822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.777851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 
00:25:22.986 [2024-05-15 04:26:10.787544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.787718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.787743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.787758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.787770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.787799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.797656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.797839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.797867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.797885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.797897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.797928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.807728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.807910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.807943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.807959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.807972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.808001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 
00:25:22.986 [2024-05-15 04:26:10.817682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.817915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.817951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.817973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.817986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.818016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.827656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.827824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.827850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.827865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.827877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.827906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.837692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.837860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.837886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.837901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.837913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.837967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 
00:25:22.986 [2024-05-15 04:26:10.847795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.847970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.847996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.848011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.848023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.848052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.857778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.857961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.857987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.858002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.858014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.858043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.867769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.867945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.867972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.867986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.867998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.868029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 
00:25:22.986 [2024-05-15 04:26:10.877890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.878076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.878103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.878117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.878129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.878159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.887908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.986 [2024-05-15 04:26:10.888082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.986 [2024-05-15 04:26:10.888108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.986 [2024-05-15 04:26:10.888122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.986 [2024-05-15 04:26:10.888134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.986 [2024-05-15 04:26:10.888163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.986 qpair failed and we were unable to recover it. 00:25:22.986 [2024-05-15 04:26:10.897874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.898056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.898082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.898096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.898108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.898137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 
00:25:22.987 [2024-05-15 04:26:10.907910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.908087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.908113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.908134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.908147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.908177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.918017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.918198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.918224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.918239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.918251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.918281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.928009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.928222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.928248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.928263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.928275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.928304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 
00:25:22.987 [2024-05-15 04:26:10.938025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.938240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.938266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.938281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.938293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.938322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.948023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.948198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.948224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.948239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.948251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.948280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.958063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.958229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.958256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.958270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.958282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.958312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 
00:25:22.987 [2024-05-15 04:26:10.968053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.968223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.968250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.968265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.968277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.968305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.978129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.978303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.978329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.978343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.978355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.978385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:22.987 [2024-05-15 04:26:10.988126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.988297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.988323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.988338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.988351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.988393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 
00:25:22.987 [2024-05-15 04:26:10.998160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.987 [2024-05-15 04:26:10.998331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.987 [2024-05-15 04:26:10.998363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.987 [2024-05-15 04:26:10.998378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.987 [2024-05-15 04:26:10.998390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:22.987 [2024-05-15 04:26:10.998420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.987 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.008168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.008388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.008415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.008430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.008442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.008485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.018304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.018503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.018529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.018543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.018556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.018586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.028222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.028388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.028415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.028429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.028442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.028471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.038287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.038455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.038480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.038494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.038506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.038541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.048293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.048467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.048493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.048508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.048519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.048549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.058304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.058500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.058525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.058540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.058552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.058581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.068328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.068496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.068522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.068536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.068548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.068590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.078367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.078575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.078601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.078616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.078628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.078657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.088394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.088561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.088592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.088608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.088620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.088649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.098528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.098707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.098733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.098747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.098759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.098788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.108449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.108685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.108711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.108726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.108738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.108767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.118454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.118616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.118641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.118656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.118668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.118697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.128503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.128670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.128695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.128709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.128727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.128756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.138543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.138715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.138741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.138755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.138767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.138796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.148583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.148751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.148778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.148792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.148805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.148834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.158663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.158846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.158871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.158886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.158897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.158927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.168642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.168819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.168845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.168859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.168871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.168900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.178656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.178837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.178863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.178881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.178893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.178923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.188663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.188860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.188886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.188901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.188913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.188949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.198682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.198846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.198872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.198886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.198898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.198928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.208721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.208906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.208940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.208957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.208969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.209011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.218757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.218938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.218965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.218986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.218999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.219029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 00:25:23.247 [2024-05-15 04:26:11.228787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.228963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.247 [2024-05-15 04:26:11.228989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.247 [2024-05-15 04:26:11.229004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.247 [2024-05-15 04:26:11.229016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.247 [2024-05-15 04:26:11.229058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.247 qpair failed and we were unable to recover it. 
00:25:23.247 [2024-05-15 04:26:11.238823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.247 [2024-05-15 04:26:11.239003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.248 [2024-05-15 04:26:11.239029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.248 [2024-05-15 04:26:11.239043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.248 [2024-05-15 04:26:11.239055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.248 [2024-05-15 04:26:11.239085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.248 qpair failed and we were unable to recover it. 00:25:23.248 [2024-05-15 04:26:11.248826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.248 [2024-05-15 04:26:11.249006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.248 [2024-05-15 04:26:11.249032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.248 [2024-05-15 04:26:11.249047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.248 [2024-05-15 04:26:11.249059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.248 [2024-05-15 04:26:11.249088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.248 qpair failed and we were unable to recover it. 00:25:23.248 [2024-05-15 04:26:11.258898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.248 [2024-05-15 04:26:11.259105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.248 [2024-05-15 04:26:11.259131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.248 [2024-05-15 04:26:11.259145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.248 [2024-05-15 04:26:11.259157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.248 [2024-05-15 04:26:11.259187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.248 qpair failed and we were unable to recover it. 
00:25:23.507 [2024-05-15 04:26:11.268876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.269045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.269071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.269086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.269098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.269127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 00:25:23.507 [2024-05-15 04:26:11.278907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.279132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.279157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.279172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.279184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.279214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 00:25:23.507 [2024-05-15 04:26:11.288953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.289122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.289148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.289162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.289175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.289204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 
00:25:23.507 [2024-05-15 04:26:11.298980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.299159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.299184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.299198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.299210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.299239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 00:25:23.507 [2024-05-15 04:26:11.309048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.309240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.309265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.309285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.309298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.309327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 00:25:23.507 [2024-05-15 04:26:11.319034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.319206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.319232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.319246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.319259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.319288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.507 qpair failed and we were unable to recover it. 
00:25:23.507 [2024-05-15 04:26:11.329056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.507 [2024-05-15 04:26:11.329226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.507 [2024-05-15 04:26:11.329253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.507 [2024-05-15 04:26:11.329267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.507 [2024-05-15 04:26:11.329283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.507 [2024-05-15 04:26:11.329312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.339126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.339305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.339332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.339346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.339358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.339401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.349120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.349307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.349332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.349346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.349358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.349388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 
00:25:23.508 [2024-05-15 04:26:11.359130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.359302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.359328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.359342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.359354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.359383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.369156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.369329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.369355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.369369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.369381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.369410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.379216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.379414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.379439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.379453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.379465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.379494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 
00:25:23.508 [2024-05-15 04:26:11.389287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.389495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.389521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.389535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.389548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.389576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.399380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.399547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.399580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.399599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.399612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.399642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.409425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.409643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.409669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.409684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.409696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.409725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 
00:25:23.508 [2024-05-15 04:26:11.419362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.419612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.419637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.419652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.419664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.419693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.429334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.429552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.429578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.429592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.429604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.429633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.439403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.439574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.439600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.439615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.439628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.439665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 
00:25:23.508 [2024-05-15 04:26:11.449423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.449612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.449638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.449654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.449667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.449696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.459507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.459682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.459708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.459722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.459734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.508 [2024-05-15 04:26:11.459764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.508 qpair failed and we were unable to recover it. 00:25:23.508 [2024-05-15 04:26:11.469461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.508 [2024-05-15 04:26:11.469631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.508 [2024-05-15 04:26:11.469658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.508 [2024-05-15 04:26:11.469672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.508 [2024-05-15 04:26:11.469684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.469713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 
00:25:23.509 [2024-05-15 04:26:11.479561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.509 [2024-05-15 04:26:11.479728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.509 [2024-05-15 04:26:11.479754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.509 [2024-05-15 04:26:11.479768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.509 [2024-05-15 04:26:11.479780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.479808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 00:25:23.509 [2024-05-15 04:26:11.489529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.509 [2024-05-15 04:26:11.489736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.509 [2024-05-15 04:26:11.489768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.509 [2024-05-15 04:26:11.489783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.509 [2024-05-15 04:26:11.489795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.489837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 00:25:23.509 [2024-05-15 04:26:11.499535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.509 [2024-05-15 04:26:11.499719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.509 [2024-05-15 04:26:11.499745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.509 [2024-05-15 04:26:11.499759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.509 [2024-05-15 04:26:11.499771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.499800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 
00:25:23.509 [2024-05-15 04:26:11.509674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.509 [2024-05-15 04:26:11.509853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.509 [2024-05-15 04:26:11.509880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.509 [2024-05-15 04:26:11.509894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.509 [2024-05-15 04:26:11.509907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.509943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 00:25:23.509 [2024-05-15 04:26:11.519696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.509 [2024-05-15 04:26:11.519875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.509 [2024-05-15 04:26:11.519901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.509 [2024-05-15 04:26:11.519920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.509 [2024-05-15 04:26:11.519941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.509 [2024-05-15 04:26:11.519973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.509 qpair failed and we were unable to recover it. 00:25:23.768 [2024-05-15 04:26:11.529702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.529867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.529894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.529908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.529926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.529966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 
00:25:23.768 [2024-05-15 04:26:11.539695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.539914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.539948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.539963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.539975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.540006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 00:25:23.768 [2024-05-15 04:26:11.549693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.549861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.549887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.549902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.549913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.549951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 00:25:23.768 [2024-05-15 04:26:11.559753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.559942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.559969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.559983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.559995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.560038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 
00:25:23.768 [2024-05-15 04:26:11.569722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.569887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.569914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.569935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.569950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.569980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 00:25:23.768 [2024-05-15 04:26:11.579775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.579959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.768 [2024-05-15 04:26:11.579985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.768 [2024-05-15 04:26:11.580000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.768 [2024-05-15 04:26:11.580012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.768 [2024-05-15 04:26:11.580041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.768 qpair failed and we were unable to recover it. 00:25:23.768 [2024-05-15 04:26:11.589788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.768 [2024-05-15 04:26:11.589958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.589984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.589999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.590010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.769 [2024-05-15 04:26:11.590040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.769 qpair failed and we were unable to recover it. 
00:25:23.769 [2024-05-15 04:26:11.599815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.599985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.600012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.600026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.600039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.769 [2024-05-15 04:26:11.600082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.609833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.610000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.610026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.610041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.610053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.769 [2024-05-15 04:26:11.610083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.619881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.620099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.620125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.620140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.620158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.769 [2024-05-15 04:26:11.620189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.769 qpair failed and we were unable to recover it. 
00:25:23.769 [2024-05-15 04:26:11.630011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.630181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.630208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.630222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.630234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:23.769 [2024-05-15 04:26:11.630263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.639979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.640156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.640189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.640205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.640219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.640263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.650007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.650178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.650205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.650221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.650233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.650263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 
00:25:23.769 [2024-05-15 04:26:11.660029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.660199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.660225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.660240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.660252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.660281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.670082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.670250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.670277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.670291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.670304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.670333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.680073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.680270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.680296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.680310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.680322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.680350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 
00:25:23.769 [2024-05-15 04:26:11.690176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.690381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.690407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.690421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.690433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.690462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.769 qpair failed and we were unable to recover it. 00:25:23.769 [2024-05-15 04:26:11.700212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.769 [2024-05-15 04:26:11.700415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.769 [2024-05-15 04:26:11.700441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.769 [2024-05-15 04:26:11.700455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.769 [2024-05-15 04:26:11.700468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.769 [2024-05-15 04:26:11.700497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:23.770 [2024-05-15 04:26:11.710196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.710408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.710434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.710455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.710468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.710497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 
00:25:23.770 [2024-05-15 04:26:11.720174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.720342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.720368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.720383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.720395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.720424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:23.770 [2024-05-15 04:26:11.730230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.730409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.730434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.730449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.730461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.730490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:23.770 [2024-05-15 04:26:11.740228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.740436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.740462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.740477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.740489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.740531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 
00:25:23.770 [2024-05-15 04:26:11.750253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.750427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.750453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.750467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.750479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.750508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:23.770 [2024-05-15 04:26:11.760287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.760527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.760554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.760568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.760580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.760622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:23.770 [2024-05-15 04:26:11.770299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.770462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.770488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.770502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.770514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.770544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 
00:25:23.770 [2024-05-15 04:26:11.780340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.770 [2024-05-15 04:26:11.780546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.770 [2024-05-15 04:26:11.780572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.770 [2024-05-15 04:26:11.780586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.770 [2024-05-15 04:26:11.780598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:23.770 [2024-05-15 04:26:11.780640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.770 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.790436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.790602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.790628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.790643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.790654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.790684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.800377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.800548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.800583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.800598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.800611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.800640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 
00:25:24.030 [2024-05-15 04:26:11.810395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.810559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.810585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.810600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.810612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.810640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.820432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.820602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.820628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.820642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.820654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.820696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.830473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.830688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.830714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.830728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.830740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.830770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 
00:25:24.030 [2024-05-15 04:26:11.840546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.840720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.840746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.840760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.840772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.840807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.850524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.850687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.030 [2024-05-15 04:26:11.850714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.030 [2024-05-15 04:26:11.850728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.030 [2024-05-15 04:26:11.850741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.030 [2024-05-15 04:26:11.850783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.030 qpair failed and we were unable to recover it. 00:25:24.030 [2024-05-15 04:26:11.860542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.030 [2024-05-15 04:26:11.860712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.860737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.860751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.860763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.860792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 
00:25:24.031 [2024-05-15 04:26:11.870619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.870789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.870815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.870829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.870842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.870871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.880602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.880783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.880809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.880824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.880836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.880865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.890713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.890880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.890911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.890927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.890949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.890979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 
00:25:24.031 [2024-05-15 04:26:11.900701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.900878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.900903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.900917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.900936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.900968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.910690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.910882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.910908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.910922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.910943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.910974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.920714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.920887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.920913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.920927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.920947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.920977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 
00:25:24.031 [2024-05-15 04:26:11.930772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.930950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.930975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.930990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.931002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.931037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.940805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.940990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.941017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.941031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.941043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.941072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.950836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.951029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.951055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.951069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.951082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.951110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 
00:25:24.031 [2024-05-15 04:26:11.960919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.961092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.961117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.961132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.961144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.961173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.970860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.971041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.971068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.971083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.971095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.971124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 00:25:24.031 [2024-05-15 04:26:11.980902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.981126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.981152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.981167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.981179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.981208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.031 qpair failed and we were unable to recover it. 
00:25:24.031 [2024-05-15 04:26:11.990895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.031 [2024-05-15 04:26:11.991071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.031 [2024-05-15 04:26:11.991096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.031 [2024-05-15 04:26:11.991111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.031 [2024-05-15 04:26:11.991123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.031 [2024-05-15 04:26:11.991152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 00:25:24.032 [2024-05-15 04:26:12.000947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.032 [2024-05-15 04:26:12.001116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.032 [2024-05-15 04:26:12.001142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.032 [2024-05-15 04:26:12.001156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.032 [2024-05-15 04:26:12.001168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.032 [2024-05-15 04:26:12.001197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 00:25:24.032 [2024-05-15 04:26:12.010966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.032 [2024-05-15 04:26:12.011131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.032 [2024-05-15 04:26:12.011157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.032 [2024-05-15 04:26:12.011171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.032 [2024-05-15 04:26:12.011183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.032 [2024-05-15 04:26:12.011212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 
00:25:24.032 [2024-05-15 04:26:12.021012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.032 [2024-05-15 04:26:12.021186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.032 [2024-05-15 04:26:12.021211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.032 [2024-05-15 04:26:12.021226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.032 [2024-05-15 04:26:12.021244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.032 [2024-05-15 04:26:12.021274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 00:25:24.032 [2024-05-15 04:26:12.031040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.032 [2024-05-15 04:26:12.031217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.032 [2024-05-15 04:26:12.031241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.032 [2024-05-15 04:26:12.031256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.032 [2024-05-15 04:26:12.031268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.032 [2024-05-15 04:26:12.031297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 00:25:24.032 [2024-05-15 04:26:12.041070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.032 [2024-05-15 04:26:12.041242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.032 [2024-05-15 04:26:12.041266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.032 [2024-05-15 04:26:12.041279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.032 [2024-05-15 04:26:12.041291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.032 [2024-05-15 04:26:12.041321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.032 qpair failed and we were unable to recover it. 
00:25:24.292 [2024-05-15 04:26:12.051079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.292 [2024-05-15 04:26:12.051249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.292 [2024-05-15 04:26:12.051274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.292 [2024-05-15 04:26:12.051288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.292 [2024-05-15 04:26:12.051300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.292 [2024-05-15 04:26:12.051330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.292 qpair failed and we were unable to recover it. 00:25:24.292 [2024-05-15 04:26:12.061110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.292 [2024-05-15 04:26:12.061292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.292 [2024-05-15 04:26:12.061317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.292 [2024-05-15 04:26:12.061331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.292 [2024-05-15 04:26:12.061343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.292 [2024-05-15 04:26:12.061372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.292 qpair failed and we were unable to recover it. 00:25:24.292 [2024-05-15 04:26:12.071148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.292 [2024-05-15 04:26:12.071315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.292 [2024-05-15 04:26:12.071341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.292 [2024-05-15 04:26:12.071355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.292 [2024-05-15 04:26:12.071367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.292 [2024-05-15 04:26:12.071396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.292 qpair failed and we were unable to recover it. 
00:25:24.292 [2024-05-15 04:26:12.081180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.292 [2024-05-15 04:26:12.081348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.292 [2024-05-15 04:26:12.081374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.292 [2024-05-15 04:26:12.081389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.292 [2024-05-15 04:26:12.081401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.292 [2024-05-15 04:26:12.081430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.292 qpair failed and we were unable to recover it. 00:25:24.292 [2024-05-15 04:26:12.091209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.293 [2024-05-15 04:26:12.091380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.293 [2024-05-15 04:26:12.091405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.293 [2024-05-15 04:26:12.091420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.293 [2024-05-15 04:26:12.091433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.293 [2024-05-15 04:26:12.091463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.293 qpair failed and we were unable to recover it. 00:25:24.293 [2024-05-15 04:26:12.101234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.293 [2024-05-15 04:26:12.101417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.293 [2024-05-15 04:26:12.101443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.293 [2024-05-15 04:26:12.101458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.293 [2024-05-15 04:26:12.101470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a4c000b90 00:25:24.293 [2024-05-15 04:26:12.101500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.293 qpair failed and we were unable to recover it. 
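The failures above are what a host sees when the target-side controller disappears underneath it mid-connect, which is the situation the target-disconnect test provokes on purpose. As an illustration only, and not necessarily the mechanism target_disconnect.sh itself uses, a comparable interruption can be produced against a running SPDK target by removing and re-adding the subsystem's TCP listener over the RPC socket; the NQN, address, and port below mirror the ones in this log, and the exact rpc.py option spellings are assumptions that should be checked against the SPDK tree in use:

  # Drop the listener so in-flight and new CONNECTs start failing ...
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  sleep 5
  # ... then restore it and let the host-side recovery reconnect.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420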
00:25:24.293 [2024-05-15 04:26:12.111255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.293 [2024-05-15 04:26:12.111423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.293 [2024-05-15 04:26:12.111455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.293 [2024-05-15 04:26:12.111477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.293 [2024-05-15 04:26:12.111490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:24.293 [2024-05-15 04:26:12.111521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:24.293 qpair failed and we were unable to recover it. 00:25:24.293 [2024-05-15 04:26:12.121364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.293 [2024-05-15 04:26:12.121531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.293 [2024-05-15 04:26:12.121559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.293 [2024-05-15 04:26:12.121574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.293 [2024-05-15 04:26:12.121586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a44000b90 00:25:24.293 [2024-05-15 04:26:12.121616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:24.293 qpair failed and we were unable to recover it. 00:25:24.293 [2024-05-15 04:26:12.121738] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:24.293 A controller has encountered a failure and is being reset. 00:25:24.293 Controller properly reset. 00:25:24.293 Initializing NVMe Controllers 00:25:24.293 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:24.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:24.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:24.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:24.293 Initialization complete. Launching workers. 
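At this point the host gives up on the broken admin path (the Keep Alive submission fails), resets the controller, re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, re-associates its queue pairs with lcores 0-3, and relaunches its worker threads (the "Starting thread on core N" lines that follow) before TC2 reports its timing and passes. If the same target were still listening, its reachability could be double-checked from any host with stock nvme-cli, as in the hedged sketch below; these are standard nvme-cli options, and the commands only make sense against a live listener:

  # Confirm the discovery service and the subsystem are answering again.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1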
00:25:24.293 Starting thread on core 1 00:25:24.293 Starting thread on core 2 00:25:24.293 Starting thread on core 3 00:25:24.293 Starting thread on core 0 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:25:24.293 00:25:24.293 real 0m10.669s 00:25:24.293 user 0m16.290s 00:25:24.293 sys 0m5.651s 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:24.293 ************************************ 00:25:24.293 END TEST nvmf_target_disconnect_tc2 00:25:24.293 ************************************ 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.293 rmmod nvme_tcp 00:25:24.293 rmmod nvme_fabrics 00:25:24.293 rmmod nvme_keyring 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3492536 ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3492536 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3492536 ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3492536 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3492536 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3492536' 00:25:24.293 killing process with pid 3492536 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3492536 00:25:24.293 [2024-05-15 04:26:12.268768] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:25:24.293 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3492536 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.861 04:26:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.768 04:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:26.768 00:25:26.768 real 0m15.942s 00:25:26.768 user 0m41.888s 00:25:26.768 sys 0m8.038s 00:25:26.768 04:26:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:26.768 04:26:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:26.768 ************************************ 00:25:26.768 END TEST nvmf_target_disconnect 00:25:26.768 ************************************ 00:25:26.768 04:26:14 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:25:26.768 04:26:14 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.768 04:26:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.768 04:26:14 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:26.768 00:25:26.768 real 19m55.222s 00:25:26.768 user 46m44.120s 00:25:26.768 sys 5m9.341s 00:25:26.768 04:26:14 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:26.768 04:26:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.768 ************************************ 00:25:26.768 END TEST nvmf_tcp 00:25:26.768 ************************************ 00:25:26.768 04:26:14 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:25:26.768 04:26:14 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:26.768 04:26:14 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:26.768 04:26:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:26.768 04:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:26.768 ************************************ 00:25:26.768 START TEST spdkcli_nvmf_tcp 00:25:26.768 ************************************ 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:26.768 * Looking for test storage... 
00:25:26.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.768 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.027 04:26:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3493613 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3493613 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3493613 ']' 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:27.028 04:26:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.028 [2024-05-15 04:26:14.838169] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:27.028 [2024-05-15 04:26:14.838291] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493613 ] 00:25:27.028 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.028 [2024-05-15 04:26:14.917396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:27.028 [2024-05-15 04:26:15.036581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.028 [2024-05-15 04:26:15.036586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.962 04:26:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:27.962 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:27.962 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:27.962 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:27.962 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:27.962 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:27.963 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:27.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:27.963 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:27.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:27.963 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:27.963 ' 00:25:30.494 [2024-05-15 04:26:18.334159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.871 [2024-05-15 04:26:19.557910] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:31.871 [2024-05-15 04:26:19.558552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:34.400 [2024-05-15 04:26:21.877787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:36.298 [2024-05-15 04:26:23.819682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:37.668 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:37.668 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:37.668 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:37.669 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.669 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.669 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:37.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:37.669 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:37.669 04:26:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.927 04:26:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:37.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:37.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:37.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:37.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:37.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:37.927 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:37.927 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:37.927 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:37.927 ' 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:43.187 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:43.187 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:43.187 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:43.187 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:43.187 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:43.187 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:43.187 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:43.187 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:43.187 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3493613 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3493613 ']' 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3493613 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3493613 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3493613' 00:25:43.187 killing process with pid 3493613 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3493613 00:25:43.187 [2024-05-15 04:26:31.156255] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:43.187 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3493613 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3493613 ']' 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3493613 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3493613 ']' 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3493613 00:25:43.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3493613) - No such process 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3493613 is not found' 00:25:43.480 Process with pid 3493613 is not found 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:43.480 00:25:43.480 real 0m16.693s 00:25:43.480 user 0m35.266s 00:25:43.480 sys 0m0.832s 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:43.480 04:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:25:43.480 ************************************ 00:25:43.480 END TEST spdkcli_nvmf_tcp 00:25:43.480 ************************************ 00:25:43.480 04:26:31 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:43.480 04:26:31 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:43.480 04:26:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:43.480 04:26:31 -- common/autotest_common.sh@10 -- # set +x 00:25:43.480 ************************************ 00:25:43.480 START TEST nvmf_identify_passthru 00:25:43.480 ************************************ 00:25:43.480 04:26:31 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:43.738 * Looking for test storage... 00:25:43.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:43.738 04:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.738 04:26:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.738 04:26:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.738 04:26:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.738 04:26:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:43.738 04:26:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.738 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.738 04:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.738 04:26:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.739 04:26:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.739 04:26:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.739 04:26:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.739 04:26:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:43.739 04:26:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.739 04:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.739 04:26:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:43.739 04:26:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.739 04:26:31 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.739 04:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.271 04:26:33 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.271 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.271 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.271 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.271 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:25:46.271 00:25:46.271 --- 10.0.0.2 ping statistics --- 00:25:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.271 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:25:46.271 00:25:46.271 --- 10.0.0.1 ping statistics --- 00:25:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.271 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.271 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.272 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.272 04:26:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.272 04:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:46.272 04:26:33 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:46.272 04:26:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:25:46.272 04:26:34 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:46.272 04:26:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:46.272 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.456 
04:26:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:50.456 04:26:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:50.456 04:26:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:50.456 04:26:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:50.456 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3498651 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.640 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3498651 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3498651 ']' 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:54.640 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:54.640 [2024-05-15 04:26:42.572945] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:25:54.640 [2024-05-15 04:26:42.573026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.640 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.640 [2024-05-15 04:26:42.647068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.898 [2024-05-15 04:26:42.753858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.898 [2024-05-15 04:26:42.753909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:54.898 [2024-05-15 04:26:42.753942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.898 [2024-05-15 04:26:42.753954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.898 [2024-05-15 04:26:42.753964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.898 [2024-05-15 04:26:42.754030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.899 [2024-05-15 04:26:42.754095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.899 [2024-05-15 04:26:42.754163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.899 [2024-05-15 04:26:42.754161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:25:54.899 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:54.899 INFO: Log level set to 20 00:25:54.899 INFO: Requests: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "method": "nvmf_set_config", 00:25:54.899 "id": 1, 00:25:54.899 "params": { 00:25:54.899 "admin_cmd_passthru": { 00:25:54.899 "identify_ctrlr": true 00:25:54.899 } 00:25:54.899 } 00:25:54.899 } 00:25:54.899 00:25:54.899 INFO: response: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "id": 1, 00:25:54.899 "result": true 00:25:54.899 } 00:25:54.899 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.899 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:54.899 INFO: Setting log level to 20 00:25:54.899 INFO: Setting log level to 20 00:25:54.899 INFO: Log level set to 20 00:25:54.899 INFO: Log level set to 20 00:25:54.899 INFO: Requests: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "method": "framework_start_init", 00:25:54.899 "id": 1 00:25:54.899 } 00:25:54.899 00:25:54.899 INFO: Requests: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "method": "framework_start_init", 00:25:54.899 "id": 1 00:25:54.899 } 00:25:54.899 00:25:54.899 [2024-05-15 04:26:42.877145] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:54.899 INFO: response: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "id": 1, 00:25:54.899 "result": true 00:25:54.899 } 00:25:54.899 00:25:54.899 INFO: response: 00:25:54.899 { 00:25:54.899 "jsonrpc": "2.0", 00:25:54.899 "id": 1, 00:25:54.899 "result": true 00:25:54.899 } 00:25:54.899 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.899 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.899 04:26:42 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.899 INFO: Setting log level to 40 00:25:54.899 INFO: Setting log level to 40 00:25:54.899 INFO: Setting log level to 40 00:25:54.899 [2024-05-15 04:26:42.887127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.899 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.899 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:55.156 04:26:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:55.156 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.156 04:26:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 Nvme0n1 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 [2024-05-15 04:26:45.787062] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:58.434 [2024-05-15 04:26:45.787384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 [ 00:25:58.434 { 00:25:58.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:58.434 "subtype": "Discovery", 00:25:58.434 "listen_addresses": [], 00:25:58.434 "allow_any_host": true, 00:25:58.434 "hosts": [] 00:25:58.434 }, 00:25:58.434 { 00:25:58.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.434 "subtype": "NVMe", 00:25:58.434 "listen_addresses": [ 00:25:58.434 { 00:25:58.434 "trtype": "TCP", 
00:25:58.434 "adrfam": "IPv4", 00:25:58.434 "traddr": "10.0.0.2", 00:25:58.434 "trsvcid": "4420" 00:25:58.434 } 00:25:58.434 ], 00:25:58.434 "allow_any_host": true, 00:25:58.434 "hosts": [], 00:25:58.434 "serial_number": "SPDK00000000000001", 00:25:58.434 "model_number": "SPDK bdev Controller", 00:25:58.434 "max_namespaces": 1, 00:25:58.434 "min_cntlid": 1, 00:25:58.434 "max_cntlid": 65519, 00:25:58.434 "namespaces": [ 00:25:58.434 { 00:25:58.434 "nsid": 1, 00:25:58.434 "bdev_name": "Nvme0n1", 00:25:58.434 "name": "Nvme0n1", 00:25:58.434 "nguid": "2FD79E48CF9D42FB97F912FA5976085F", 00:25:58.434 "uuid": "2fd79e48-cf9d-42fb-97f9-12fa5976085f" 00:25:58.434 } 00:25:58.434 ] 00:25:58.434 } 00:25:58.434 ] 00:25:58.434 04:26:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:58.434 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:58.434 04:26:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:58.434 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:58.434 04:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.434 rmmod nvme_tcp 00:25:58.434 rmmod nvme_fabrics 00:25:58.434 rmmod 
nvme_keyring 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3498651 ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3498651 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3498651 ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3498651 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3498651 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3498651' 00:25:58.434 killing process with pid 3498651 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3498651 00:25:58.434 [2024-05-15 04:26:46.127036] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:58.434 04:26:46 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3498651 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.806 04:26:47 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.806 04:26:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:59.806 04:26:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.339 04:26:49 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:02.339 00:26:02.339 real 0m18.318s 00:26:02.339 user 0m26.350s 00:26:02.339 sys 0m2.605s 00:26:02.339 04:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:02.339 04:26:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:02.339 ************************************ 00:26:02.339 END TEST nvmf_identify_passthru 00:26:02.339 ************************************ 00:26:02.339 04:26:49 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:02.339 04:26:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:02.339 04:26:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:02.339 04:26:49 -- common/autotest_common.sh@10 -- # set +x 00:26:02.339 ************************************ 00:26:02.339 START TEST nvmf_dif 
00:26:02.339 ************************************ 00:26:02.339 04:26:49 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:02.339 * Looking for test storage... 00:26:02.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.339 04:26:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.339 04:26:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.340 04:26:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.340 04:26:49 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.340 04:26:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.340 04:26:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 04:26:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 04:26:49 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 04:26:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:02.340 04:26:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.340 04:26:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:02.340 04:26:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:02.340 04:26:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:02.340 04:26:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:02.340 04:26:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.340 04:26:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:02.340 04:26:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.340 04:26:49 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.340 04:26:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.870 04:26:52 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:26:04.870 00:26:04.870 --- 10.0.0.2 ping statistics --- 00:26:04.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.870 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:26:04.870 04:26:52 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:26:04.871 00:26:04.871 --- 10.0.0.1 ping statistics --- 00:26:04.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.871 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:04.871 04:26:52 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.871 04:26:52 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:04.871 04:26:52 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:04.871 04:26:52 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:05.805 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:05.805 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:05.805 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:05.805 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:05.805 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:05.805 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:05.805 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:05.805 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:05.805 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:05.805 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:05.805 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:05.805 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:05.805 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:05.805 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:05.805 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:05.805 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:05.805 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:05.805 04:26:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:05.805 04:26:53 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3502304 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:05.805 04:26:53 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3502304 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3502304 ']' 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.805 04:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:05.805 [2024-05-15 04:26:53.797607] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:26:05.805 [2024-05-15 04:26:53.797707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.064 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.064 [2024-05-15 04:26:53.878913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.064 [2024-05-15 04:26:53.993959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.064 [2024-05-15 04:26:53.994035] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.064 [2024-05-15 04:26:53.994062] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.064 [2024-05-15 04:26:53.994075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.064 [2024-05-15 04:26:53.994087] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
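Up to this point the trace has split the two E810 ports across network namespaces so the box can act as both initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and numbered 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, reachability is checked in both directions, and nvmf_tgt is started inside the namespace. Condensed from the commands just traced (Jenkins workspace paths shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # backgrounding assumed; the script waits via waitforlisten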
00:26:06.064 [2024-05-15 04:26:53.994119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:26:06.996 04:26:54 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 04:26:54 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.996 04:26:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:06.996 04:26:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 [2024-05-15 04:26:54.802692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.996 04:26:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 ************************************ 00:26:06.996 START TEST fio_dif_1_default 00:26:06.996 ************************************ 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 bdev_null0 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.996 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:06.997 [2024-05-15 04:26:54.866773] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:06.997 [2024-05-15 04:26:54.867056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.997 { 00:26:06.997 "params": { 00:26:06.997 "name": "Nvme$subsystem", 00:26:06.997 "trtype": "$TEST_TRANSPORT", 00:26:06.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.997 "adrfam": "ipv4", 00:26:06.997 "trsvcid": "$NVMF_PORT", 00:26:06.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.997 "hdgst": ${hdgst:-false}, 00:26:06.997 "ddgst": ${ddgst:-false} 00:26:06.997 }, 00:26:06.997 "method": "bdev_nvme_attach_controller" 00:26:06.997 } 00:26:06.997 EOF 00:26:06.997 )") 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:06.997 "params": { 00:26:06.997 "name": "Nvme0", 00:26:06.997 "trtype": "tcp", 00:26:06.997 "traddr": "10.0.0.2", 00:26:06.997 "adrfam": "ipv4", 00:26:06.997 "trsvcid": "4420", 00:26:06.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.997 "hdgst": false, 00:26:06.997 "ddgst": false 00:26:06.997 }, 00:26:06.997 "method": "bdev_nvme_attach_controller" 00:26:06.997 }' 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:06.997 04:26:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.255 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:07.255 fio-3.35 00:26:07.255 Starting 1 thread 00:26:07.255 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.446 00:26:19.446 filename0: (groupid=0, jobs=1): err= 0: pid=3502537: Wed May 15 04:27:05 2024 00:26:19.446 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10021msec) 00:26:19.446 slat (nsec): min=6507, max=71592, avg=9510.60, stdev=5174.95 00:26:19.446 clat (usec): min=1014, max=44018, avg=21518.69, stdev=20368.08 00:26:19.446 lat (usec): min=1023, max=44068, avg=21528.20, stdev=20366.87 00:26:19.446 clat percentiles (usec): 00:26:19.446 | 1.00th=[ 1037], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1106], 00:26:19.446 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[41681], 60.00th=[41681], 00:26:19.446 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:26:19.446 | 99.00th=[41681], 99.50th=[42206], 
99.90th=[43779], 99.95th=[43779], 00:26:19.446 | 99.99th=[43779] 00:26:19.446 bw ( KiB/s): min= 704, max= 768, per=99.94%, avg=742.40, stdev=30.45, samples=20 00:26:19.446 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:26:19.446 lat (msec) : 2=49.89%, 50=50.11% 00:26:19.446 cpu : usr=89.94%, sys=9.76%, ctx=17, majf=0, minf=268 00:26:19.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.446 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:19.446 00:26:19.446 Run status group 0 (all jobs): 00:26:19.446 READ: bw=742KiB/s (760kB/s), 742KiB/s-742KiB/s (760kB/s-760kB/s), io=7440KiB (7619kB), run=10021-10021msec 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 00:26:19.446 real 0m11.189s 00:26:19.446 user 0m10.113s 00:26:19.446 sys 0m1.264s 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 ************************************ 00:26:19.446 END TEST fio_dif_1_default 00:26:19.446 ************************************ 00:26:19.446 04:27:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:19.446 04:27:06 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:19.446 04:27:06 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 ************************************ 00:26:19.446 START TEST fio_dif_1_multi_subsystems 00:26:19.446 ************************************ 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@28 -- # local sub 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 bdev_null0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 [2024-05-15 04:27:06.114549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 bdev_null1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:19.446 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:19.447 { 00:26:19.447 "params": { 00:26:19.447 "name": "Nvme$subsystem", 
00:26:19.447 "trtype": "$TEST_TRANSPORT", 00:26:19.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.447 "adrfam": "ipv4", 00:26:19.447 "trsvcid": "$NVMF_PORT", 00:26:19.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.447 "hdgst": ${hdgst:-false}, 00:26:19.447 "ddgst": ${ddgst:-false} 00:26:19.447 }, 00:26:19.447 "method": "bdev_nvme_attach_controller" 00:26:19.447 } 00:26:19.447 EOF 00:26:19.447 )") 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:19.447 { 00:26:19.447 "params": { 00:26:19.447 "name": "Nvme$subsystem", 00:26:19.447 "trtype": "$TEST_TRANSPORT", 00:26:19.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.447 "adrfam": "ipv4", 00:26:19.447 "trsvcid": "$NVMF_PORT", 00:26:19.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.447 "hdgst": ${hdgst:-false}, 00:26:19.447 "ddgst": ${ddgst:-false} 00:26:19.447 }, 00:26:19.447 "method": "bdev_nvme_attach_controller" 00:26:19.447 } 00:26:19.447 EOF 00:26:19.447 )") 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:19.447 "params": { 00:26:19.447 "name": "Nvme0", 00:26:19.447 "trtype": "tcp", 00:26:19.447 "traddr": "10.0.0.2", 00:26:19.447 "adrfam": "ipv4", 00:26:19.447 "trsvcid": "4420", 00:26:19.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:19.447 "hdgst": false, 00:26:19.447 "ddgst": false 00:26:19.447 }, 00:26:19.447 "method": "bdev_nvme_attach_controller" 00:26:19.447 },{ 00:26:19.447 "params": { 00:26:19.447 "name": "Nvme1", 00:26:19.447 "trtype": "tcp", 00:26:19.447 "traddr": "10.0.0.2", 00:26:19.447 "adrfam": "ipv4", 00:26:19.447 "trsvcid": "4420", 00:26:19.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:19.447 "hdgst": false, 00:26:19.447 "ddgst": false 00:26:19.447 }, 00:26:19.447 "method": "bdev_nvme_attach_controller" 00:26:19.447 }' 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:19.447 04:27:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.447 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:19.447 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:19.447 fio-3.35 00:26:19.447 Starting 2 threads 00:26:19.447 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.409 00:26:29.409 filename0: (groupid=0, jobs=1): err= 0: pid=3504143: Wed May 15 04:27:17 2024 00:26:29.409 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:26:29.409 slat (nsec): min=7272, max=62651, avg=10689.98, stdev=5857.55 00:26:29.409 clat (usec): min=40924, max=43064, avg=41965.01, stdev=157.88 00:26:29.409 lat (usec): min=40933, max=43078, avg=41975.70, stdev=158.43 00:26:29.409 clat percentiles (usec): 00:26:29.409 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:26:29.409 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:29.409 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:29.409 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:26:29.409 | 99.99th=[43254] 
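The JSON printed above is the data-plane half: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem (Nvme0 and Nvme1 here), and fio_bdev hands it to stock fio through the SPDK bdev fio plugin, LD_PRELOADed from build/fio/spdk_bdev, while the generated job file arrives on a second file descriptor. Stripped of the /dev/fd plumbing, the invocation amounts to the sketch below (paths and on-disk file names illustrative):

# Sketch of the fio invocation the trace performs with /dev/fd/62 (SPDK JSON
# config) and /dev/fd/61 (generated job file) replaced by ordinary files.
LD_PRELOAD=./build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf nvme_attach.json dif_job.fio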
00:26:29.409 bw ( KiB/s): min= 352, max= 384, per=33.96%, avg=380.80, stdev= 9.85, samples=20 00:26:29.409 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:26:29.409 lat (msec) : 50=100.00% 00:26:29.409 cpu : usr=93.77%, sys=5.93%, ctx=14, majf=0, minf=145 00:26:29.409 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.409 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.409 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:29.409 filename1: (groupid=0, jobs=1): err= 0: pid=3504144: Wed May 15 04:27:17 2024 00:26:29.409 read: IOPS=184, BW=740KiB/s (757kB/s)(7408KiB/10017msec) 00:26:29.409 slat (nsec): min=7207, max=56765, avg=9316.44, stdev=3609.87 00:26:29.409 clat (usec): min=1039, max=42735, avg=21605.65, stdev=20357.62 00:26:29.409 lat (usec): min=1046, max=42792, avg=21614.96, stdev=20357.01 00:26:29.409 clat percentiles (usec): 00:26:29.409 | 1.00th=[ 1057], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1106], 00:26:29.409 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[41157], 60.00th=[41681], 00:26:29.409 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:26:29.409 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:26:29.409 | 99.99th=[42730] 00:26:29.409 bw ( KiB/s): min= 704, max= 768, per=66.04%, avg=739.20, stdev=32.67, samples=20 00:26:29.409 iops : min= 176, max= 192, avg=184.80, stdev= 8.17, samples=20 00:26:29.409 lat (msec) : 2=49.68%, 50=50.32% 00:26:29.409 cpu : usr=93.79%, sys=5.91%, ctx=20, majf=0, minf=132 00:26:29.409 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.409 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.409 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:29.409 00:26:29.409 Run status group 0 (all jobs): 00:26:29.409 READ: bw=1119KiB/s (1146kB/s), 381KiB/s-740KiB/s (390kB/s-757kB/s), io=11.0MiB (11.5MB), run=10017-10038msec 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.667 04:27:17 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.667 00:26:29.667 real 0m11.424s 00:26:29.667 user 0m20.204s 00:26:29.667 sys 0m1.478s 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 ************************************ 00:26:29.667 END TEST fio_dif_1_multi_subsystems 00:26:29.667 ************************************ 00:26:29.667 04:27:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:29.667 04:27:17 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:29.667 04:27:17 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 ************************************ 00:26:29.667 START TEST fio_dif_rand_params 00:26:29.667 ************************************ 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.667 bdev_null0 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.667 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.668 [2024-05-15 04:27:17.594627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.668 { 00:26:29.668 "params": { 00:26:29.668 "name": "Nvme$subsystem", 00:26:29.668 "trtype": "$TEST_TRANSPORT", 00:26:29.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.668 "adrfam": "ipv4", 00:26:29.668 "trsvcid": "$NVMF_PORT", 00:26:29.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.668 "hdgst": ${hdgst:-false}, 00:26:29.668 "ddgst": ${ddgst:-false} 00:26:29.668 }, 00:26:29.668 "method": "bdev_nvme_attach_controller" 00:26:29.668 } 00:26:29.668 EOF 00:26:29.668 )") 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
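For fio_dif_rand_params the knobs that matter were set at target/dif.sh@103: a DIF-type-3 null bdev driven at bs=128k with numjobs=3, iodepth=3 and a 5-second runtime. The job file itself is built by gen_fio_conf and never echoed into the log; a hypothetical reconstruction of its shape for this phase, in which only bs/iodepth/numjobs/runtime are taken from the trace and everything else (including filename=Nvme0n1) is assumed:

# Hypothetical job file for the 128k / DIF-type-3 phase (assumption, not
# copied from the log); written to disk here instead of /dev/fd/61.
cat << 'EOF' > dif_rand.fio
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=5
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF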
00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:29.668 "params": { 00:26:29.668 "name": "Nvme0", 00:26:29.668 "trtype": "tcp", 00:26:29.668 "traddr": "10.0.0.2", 00:26:29.668 "adrfam": "ipv4", 00:26:29.668 "trsvcid": "4420", 00:26:29.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.668 "hdgst": false, 00:26:29.668 "ddgst": false 00:26:29.668 }, 00:26:29.668 "method": "bdev_nvme_attach_controller" 00:26:29.668 }' 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:29.668 04:27:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:29.926 ... 
00:26:29.926 fio-3.35 00:26:29.926 Starting 3 threads 00:26:29.926 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.485 00:26:36.485 filename0: (groupid=0, jobs=1): err= 0: pid=3506081: Wed May 15 04:27:23 2024 00:26:36.485 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(128MiB/5007msec) 00:26:36.485 slat (nsec): min=4508, max=46994, avg=13237.49, stdev=2357.72 00:26:36.485 clat (usec): min=6561, max=54173, avg=14647.18, stdev=13493.56 00:26:36.485 lat (usec): min=6574, max=54188, avg=14660.42, stdev=13493.64 00:26:36.485 clat percentiles (usec): 00:26:36.485 | 1.00th=[ 6718], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 8455], 00:26:36.485 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:26:36.485 | 70.00th=[11469], 80.00th=[12518], 90.00th=[50070], 95.00th=[51643], 00:26:36.485 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:26:36.485 | 99.99th=[54264] 00:26:36.485 bw ( KiB/s): min=13056, max=35840, per=36.11%, avg=26137.60, stdev=7627.15, samples=10 00:26:36.485 iops : min= 102, max= 280, avg=204.20, stdev=59.59, samples=10 00:26:36.485 lat (msec) : 10=55.76%, 20=32.52%, 50=1.27%, 100=10.45% 00:26:36.485 cpu : usr=93.01%, sys=6.33%, ctx=78, majf=0, minf=94 00:26:36.485 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.485 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.485 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.485 filename0: (groupid=0, jobs=1): err= 0: pid=3506082: Wed May 15 04:27:23 2024 00:26:36.485 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(132MiB/5043msec) 00:26:36.485 slat (nsec): min=5088, max=28421, avg=13436.73, stdev=2297.31 00:26:36.485 clat (usec): min=6066, max=92237, avg=14295.71, stdev=13256.05 00:26:36.485 lat (usec): min=6080, max=92251, avg=14309.15, stdev=13255.98 00:26:36.485 clat percentiles (usec): 00:26:36.485 | 1.00th=[ 6325], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8455], 00:26:36.485 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10290], 00:26:36.485 | 70.00th=[11600], 80.00th=[12780], 90.00th=[49021], 95.00th=[51119], 00:26:36.485 | 99.00th=[54264], 99.50th=[54789], 99.90th=[91751], 99.95th=[91751], 00:26:36.485 | 99.99th=[91751] 00:26:36.485 bw ( KiB/s): min=20736, max=37632, per=37.21%, avg=26931.20, stdev=5056.75, samples=10 00:26:36.485 iops : min= 162, max= 294, avg=210.40, stdev=39.51, samples=10 00:26:36.485 lat (msec) : 10=54.84%, 20=34.72%, 50=2.56%, 100=7.87% 00:26:36.485 cpu : usr=91.43%, sys=7.24%, ctx=13, majf=0, minf=95 00:26:36.485 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.485 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.485 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.486 filename0: (groupid=0, jobs=1): err= 0: pid=3506083: Wed May 15 04:27:23 2024 00:26:36.486 read: IOPS=154, BW=19.3MiB/s (20.2MB/s)(97.2MiB/5051msec) 00:26:36.486 slat (nsec): min=6941, max=48279, avg=18263.43, stdev=6489.26 00:26:36.486 clat (usec): min=7465, max=99481, avg=19447.17, stdev=17417.27 00:26:36.486 lat (usec): min=7482, max=99499, avg=19465.43, stdev=17417.11 00:26:36.486 clat percentiles (usec): 
00:26:36.486 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9896], 00:26:36.486 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[12387], 00:26:36.486 | 70.00th=[13698], 80.00th=[15926], 90.00th=[53216], 95.00th=[54789], 00:26:36.486 | 99.00th=[58459], 99.50th=[94897], 99.90th=[99091], 99.95th=[99091], 00:26:36.486 | 99.99th=[99091] 00:26:36.486 bw ( KiB/s): min=14592, max=29184, per=27.42%, avg=19844.00, stdev=4142.56, samples=10 00:26:36.486 iops : min= 114, max= 228, avg=155.00, stdev=32.36, samples=10 00:26:36.486 lat (msec) : 10=21.85%, 20=59.38%, 50=0.13%, 100=18.64% 00:26:36.486 cpu : usr=84.02%, sys=10.91%, ctx=695, majf=0, minf=105 00:26:36.486 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.486 issued rwts: total=778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.486 00:26:36.486 Run status group 0 (all jobs): 00:26:36.486 READ: bw=70.7MiB/s (74.1MB/s), 19.3MiB/s-26.1MiB/s (20.2MB/s-27.4MB/s), io=357MiB (374MB), run=5007-5051msec 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 bdev_null0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 [2024-05-15 04:27:23.866966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 bdev_null1 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.486 bdev_null2 00:26:36.486 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:36.487 { 00:26:36.487 "params": { 00:26:36.487 "name": "Nvme$subsystem", 00:26:36.487 "trtype": "$TEST_TRANSPORT", 00:26:36.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.487 "adrfam": "ipv4", 00:26:36.487 "trsvcid": "$NVMF_PORT", 00:26:36.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.487 "hdgst": ${hdgst:-false}, 00:26:36.487 "ddgst": ${ddgst:-false} 00:26:36.487 }, 00:26:36.487 "method": "bdev_nvme_attach_controller" 00:26:36.487 } 00:26:36.487 EOF 00:26:36.487 )") 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.487 { 00:26:36.487 "params": { 00:26:36.487 "name": "Nvme$subsystem", 00:26:36.487 "trtype": "$TEST_TRANSPORT", 00:26:36.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.487 "adrfam": "ipv4", 00:26:36.487 "trsvcid": "$NVMF_PORT", 00:26:36.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.487 "hdgst": ${hdgst:-false}, 00:26:36.487 "ddgst": ${ddgst:-false} 00:26:36.487 }, 00:26:36.487 "method": "bdev_nvme_attach_controller" 00:26:36.487 } 00:26:36.487 EOF 00:26:36.487 )") 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.487 { 00:26:36.487 "params": { 00:26:36.487 "name": "Nvme$subsystem", 00:26:36.487 "trtype": "$TEST_TRANSPORT", 00:26:36.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.487 "adrfam": "ipv4", 00:26:36.487 "trsvcid": "$NVMF_PORT", 00:26:36.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.487 "hdgst": ${hdgst:-false}, 00:26:36.487 "ddgst": ${ddgst:-false} 00:26:36.487 }, 00:26:36.487 "method": "bdev_nvme_attach_controller" 00:26:36.487 } 00:26:36.487 EOF 00:26:36.487 )") 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:36.487 04:27:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:36.487 "params": { 00:26:36.487 "name": "Nvme0", 00:26:36.487 "trtype": "tcp", 00:26:36.487 "traddr": "10.0.0.2", 00:26:36.487 "adrfam": "ipv4", 00:26:36.487 "trsvcid": "4420", 00:26:36.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.487 "hdgst": false, 00:26:36.487 "ddgst": false 00:26:36.487 }, 00:26:36.487 "method": "bdev_nvme_attach_controller" 00:26:36.487 },{ 00:26:36.487 "params": { 00:26:36.487 "name": "Nvme1", 00:26:36.487 "trtype": "tcp", 00:26:36.488 "traddr": "10.0.0.2", 00:26:36.488 "adrfam": "ipv4", 00:26:36.488 "trsvcid": "4420", 00:26:36.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.488 "hdgst": false, 00:26:36.488 "ddgst": false 00:26:36.488 }, 00:26:36.488 "method": "bdev_nvme_attach_controller" 00:26:36.488 },{ 00:26:36.488 "params": { 00:26:36.488 "name": "Nvme2", 00:26:36.488 "trtype": "tcp", 00:26:36.488 "traddr": "10.0.0.2", 00:26:36.488 "adrfam": "ipv4", 00:26:36.488 "trsvcid": "4420", 00:26:36.488 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.488 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.488 "hdgst": false, 00:26:36.488 "ddgst": false 00:26:36.488 }, 00:26:36.488 "method": "bdev_nvme_attach_controller" 00:26:36.488 }' 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:36.488 04:27:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:36.488 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:36.488 ... 00:26:36.488 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:36.488 ... 00:26:36.488 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:36.488 ... 00:26:36.488 fio-3.35 00:26:36.488 Starting 24 threads 00:26:36.488 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.793 00:26:48.793 filename0: (groupid=0, jobs=1): err= 0: pid=3506924: Wed May 15 04:27:35 2024 00:26:48.793 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.2MiB/10007msec) 00:26:48.793 slat (usec): min=8, max=154, avg=44.34, stdev=26.37 00:26:48.793 clat (msec): min=14, max=338, avg=60.68, stdev=77.65 00:26:48.793 lat (msec): min=14, max=338, avg=60.73, stdev=77.65 00:26:48.793 clat percentiles (msec): 00:26:48.793 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 33], 00:26:48.793 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.793 | 70.00th=[ 34], 80.00th=[ 40], 90.00th=[ 215], 95.00th=[ 300], 00:26:48.793 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:26:48.793 | 99.99th=[ 338] 00:26:48.793 bw ( KiB/s): min= 128, max= 2048, per=4.02%, avg=1043.20, stdev=851.51, samples=20 00:26:48.793 iops : min= 32, max= 512, avg=260.80, stdev=212.88, samples=20 00:26:48.793 lat (msec) : 20=1.18%, 50=86.78%, 100=1.07%, 250=2.44%, 500=8.54% 00:26:48.793 cpu : usr=98.02%, sys=1.40%, ctx=36, majf=0, minf=30 00:26:48.793 IO depths : 1=3.2%, 2=8.6%, 4=22.5%, 8=56.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:48.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 issued rwts: total=2624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.793 filename0: (groupid=0, jobs=1): err= 0: pid=3506925: Wed May 15 04:27:35 2024 00:26:48.793 read: IOPS=269, BW=1079KiB/s (1105kB/s)(10.5MiB/10004msec) 00:26:48.793 slat (usec): min=5, max=993, avg=28.84, stdev=25.42 00:26:48.793 clat (msec): min=3, max=447, avg=59.10, stdev=80.93 00:26:48.793 lat (msec): min=3, max=447, avg=59.13, stdev=80.93 00:26:48.793 clat percentiles (msec): 00:26:48.793 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 32], 00:26:48.793 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.793 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 157], 95.00th=[ 300], 00:26:48.793 | 99.00th=[ 317], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 447], 00:26:48.793 | 99.99th=[ 447] 00:26:48.793 bw ( KiB/s): min= 128, max= 2032, per=3.96%, avg=1027.37, stdev=892.94, samples=19 00:26:48.793 iops : min= 32, max= 508, avg=256.84, stdev=223.24, samples=19 00:26:48.793 lat (msec) : 4=0.07%, 20=2.26%, 50=86.06%, 100=1.52%, 250=0.82% 00:26:48.793 lat (msec) : 500=9.27% 00:26:48.793 cpu : usr=92.69%, 
sys=3.39%, ctx=174, majf=0, minf=24 00:26:48.793 IO depths : 1=1.3%, 2=6.7%, 4=22.2%, 8=58.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:48.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 complete : 0=0.0%, 4=93.7%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.793 filename0: (groupid=0, jobs=1): err= 0: pid=3506926: Wed May 15 04:27:35 2024 00:26:48.793 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10014msec) 00:26:48.793 slat (usec): min=3, max=124, avg=34.21, stdev=14.03 00:26:48.793 clat (msec): min=17, max=413, avg=58.96, stdev=80.41 00:26:48.793 lat (msec): min=17, max=413, avg=58.99, stdev=80.41 00:26:48.793 clat percentiles (msec): 00:26:48.793 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.793 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.793 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 186], 95.00th=[ 296], 00:26:48.793 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 414], 00:26:48.793 | 99.99th=[ 414] 00:26:48.793 bw ( KiB/s): min= 128, max= 2048, per=4.14%, avg=1074.55, stdev=895.06, samples=20 00:26:48.793 iops : min= 32, max= 512, avg=268.60, stdev=223.72, samples=20 00:26:48.793 lat (msec) : 20=0.59%, 50=89.35%, 250=1.11%, 500=8.95% 00:26:48.793 cpu : usr=98.03%, sys=1.46%, ctx=22, majf=0, minf=18 00:26:48.793 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:48.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.793 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.793 filename0: (groupid=0, jobs=1): err= 0: pid=3506927: Wed May 15 04:27:35 2024 00:26:48.793 read: IOPS=276, BW=1107KiB/s (1133kB/s)(10.8MiB/10006msec) 00:26:48.793 slat (usec): min=6, max=214, avg=30.08, stdev=20.45 00:26:48.793 clat (msec): min=5, max=440, avg=57.60, stdev=76.74 00:26:48.793 lat (msec): min=5, max=440, avg=57.63, stdev=76.74 00:26:48.793 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 144], 95.00th=[ 296], 00:26:48.794 | 99.00th=[ 317], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 443], 00:26:48.794 | 99.99th=[ 443] 00:26:48.794 bw ( KiB/s): min= 128, max= 2048, per=4.08%, avg=1057.68, stdev=854.81, samples=19 00:26:48.794 iops : min= 32, max= 512, avg=264.42, stdev=213.70, samples=19 00:26:48.794 lat (msec) : 10=1.16%, 20=0.79%, 50=86.85%, 100=0.87%, 250=1.81% 00:26:48.794 lat (msec) : 500=8.53% 00:26:48.794 cpu : usr=95.77%, sys=2.14%, ctx=36, majf=0, minf=24 00:26:48.794 IO depths : 1=1.7%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename0: (groupid=0, jobs=1): err= 0: pid=3506928: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=286, BW=1145KiB/s (1173kB/s)(11.2MiB/10006msec) 00:26:48.794 slat (usec): 
min=8, max=429, avg=30.60, stdev=20.58 00:26:48.794 clat (msec): min=5, max=300, avg=55.67, stdev=59.02 00:26:48.794 lat (msec): min=5, max=300, avg=55.70, stdev=59.02 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 38], 90.00th=[ 184], 95.00th=[ 211], 00:26:48.794 | 99.00th=[ 251], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:26:48.794 | 99.99th=[ 300] 00:26:48.794 bw ( KiB/s): min= 224, max= 2048, per=4.24%, avg=1098.53, stdev=798.56, samples=19 00:26:48.794 iops : min= 56, max= 512, avg=274.63, stdev=199.64, samples=19 00:26:48.794 lat (msec) : 10=1.01%, 20=1.29%, 50=82.69%, 100=1.19%, 250=12.50% 00:26:48.794 lat (msec) : 500=1.33% 00:26:48.794 cpu : usr=94.58%, sys=2.89%, ctx=177, majf=0, minf=56 00:26:48.794 IO depths : 1=3.0%, 2=7.9%, 4=20.7%, 8=58.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename0: (groupid=0, jobs=1): err= 0: pid=3506930: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=268, BW=1074KiB/s (1100kB/s)(10.5MiB/10007msec) 00:26:48.794 slat (usec): min=8, max=112, avg=34.78, stdev=18.21 00:26:48.794 clat (msec): min=15, max=419, avg=59.28, stdev=77.37 00:26:48.794 lat (msec): min=15, max=419, avg=59.31, stdev=77.38 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 25], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 215], 95.00th=[ 300], 00:26:48.794 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 418], 00:26:48.794 | 99.99th=[ 418] 00:26:48.794 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1068.80, stdev=877.20, samples=20 00:26:48.794 iops : min= 32, max= 512, avg=267.20, stdev=219.30, samples=20 00:26:48.794 lat (msec) : 20=0.37%, 50=88.32%, 100=0.60%, 250=2.75%, 500=7.96% 00:26:48.794 cpu : usr=97.67%, sys=1.68%, ctx=58, majf=0, minf=25 00:26:48.794 IO depths : 1=4.8%, 2=10.8%, 4=24.1%, 8=52.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename0: (groupid=0, jobs=1): err= 0: pid=3506931: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=274, BW=1096KiB/s (1123kB/s)(10.7MiB/10004msec) 00:26:48.794 slat (usec): min=8, max=313, avg=18.67, stdev=12.98 00:26:48.794 clat (msec): min=15, max=349, avg=58.23, stdev=72.39 00:26:48.794 lat (msec): min=15, max=349, avg=58.25, stdev=72.39 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 22], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 182], 95.00th=[ 288], 00:26:48.794 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 351], 00:26:48.794 | 99.99th=[ 351] 00:26:48.794 bw ( KiB/s): min= 128, max= 2032, per=4.04%, avg=1046.74, 
stdev=871.22, samples=19 00:26:48.794 iops : min= 32, max= 508, avg=261.68, stdev=217.80, samples=19 00:26:48.794 lat (msec) : 20=0.66%, 50=86.36%, 100=1.31%, 250=5.84%, 500=5.84% 00:26:48.794 cpu : usr=97.41%, sys=1.63%, ctx=23, majf=0, minf=22 00:26:48.794 IO depths : 1=1.4%, 2=6.0%, 4=19.3%, 8=60.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=93.2%, 8=2.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename0: (groupid=0, jobs=1): err= 0: pid=3506932: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=268, BW=1074KiB/s (1100kB/s)(10.5MiB/10009msec) 00:26:48.794 slat (usec): min=5, max=115, avg=34.41, stdev=15.06 00:26:48.794 clat (msec): min=17, max=386, avg=59.28, stdev=80.22 00:26:48.794 lat (msec): min=17, max=386, avg=59.32, stdev=80.22 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 218], 95.00th=[ 296], 00:26:48.794 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:26:48.794 | 99.99th=[ 388] 00:26:48.794 bw ( KiB/s): min= 128, max= 2032, per=4.12%, avg=1068.80, stdev=887.97, samples=20 00:26:48.794 iops : min= 32, max= 508, avg=267.20, stdev=221.99, samples=20 00:26:48.794 lat (msec) : 20=0.60%, 50=88.91%, 100=0.37%, 250=0.60%, 500=9.52% 00:26:48.794 cpu : usr=98.02%, sys=1.47%, ctx=19, majf=0, minf=26 00:26:48.794 IO depths : 1=2.3%, 2=8.4%, 4=24.4%, 8=54.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename1: (groupid=0, jobs=1): err= 0: pid=3506933: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.1MiB/10005msec) 00:26:48.794 slat (usec): min=8, max=124, avg=42.35, stdev=28.00 00:26:48.794 clat (msec): min=5, max=383, avg=61.59, stdev=81.77 00:26:48.794 lat (msec): min=5, max=383, avg=61.64, stdev=81.77 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 33], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.794 | 70.00th=[ 35], 80.00th=[ 41], 90.00th=[ 228], 95.00th=[ 300], 00:26:48.794 | 99.00th=[ 317], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:26:48.794 | 99.99th=[ 384] 00:26:48.794 bw ( KiB/s): min= 128, max= 2016, per=3.78%, avg=980.63, stdev=845.93, samples=19 00:26:48.794 iops : min= 32, max= 504, avg=245.16, stdev=211.48, samples=19 00:26:48.794 lat (msec) : 10=0.23%, 20=1.74%, 50=85.68%, 100=1.85%, 250=0.62% 00:26:48.794 lat (msec) : 500=9.88% 00:26:48.794 cpu : usr=94.86%, sys=2.87%, ctx=207, majf=0, minf=27 00:26:48.794 IO depths : 1=0.7%, 2=1.5%, 4=6.9%, 8=76.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=90.3%, 8=6.8%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:26:48.794 filename1: (groupid=0, jobs=1): err= 0: pid=3506934: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=275, BW=1100KiB/s (1126kB/s)(10.8MiB/10007msec) 00:26:48.794 slat (usec): min=6, max=498, avg=34.65, stdev=37.95 00:26:48.794 clat (msec): min=5, max=337, avg=57.91, stdev=75.90 00:26:48.794 lat (msec): min=5, max=338, avg=57.95, stdev=75.92 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 114], 95.00th=[ 296], 00:26:48.794 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:26:48.794 | 99.99th=[ 338] 00:26:48.794 bw ( KiB/s): min= 128, max= 2048, per=4.05%, avg=1050.95, stdev=860.38, samples=19 00:26:48.794 iops : min= 32, max= 512, avg=262.74, stdev=215.09, samples=19 00:26:48.794 lat (msec) : 10=1.09%, 20=1.78%, 50=84.81%, 100=1.93%, 250=2.25% 00:26:48.794 lat (msec) : 500=8.14% 00:26:48.794 cpu : usr=95.80%, sys=2.22%, ctx=79, majf=0, minf=25 00:26:48.794 IO depths : 1=2.8%, 2=7.3%, 4=21.5%, 8=58.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:48.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.794 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.794 filename1: (groupid=0, jobs=1): err= 0: pid=3506935: Wed May 15 04:27:35 2024 00:26:48.794 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10014msec) 00:26:48.794 slat (usec): min=8, max=108, avg=33.17, stdev=20.76 00:26:48.794 clat (msec): min=15, max=329, avg=56.68, stdev=61.06 00:26:48.794 lat (msec): min=15, max=329, avg=56.71, stdev=61.07 00:26:48.794 clat percentiles (msec): 00:26:48.794 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.794 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.794 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 186], 95.00th=[ 213], 00:26:48.794 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330], 00:26:48.794 | 99.99th=[ 330] 00:26:48.794 bw ( KiB/s): min= 224, max= 2096, per=4.32%, avg=1120.00, stdev=839.08, samples=20 00:26:48.794 iops : min= 56, max= 524, avg=280.00, stdev=209.77, samples=20 00:26:48.794 lat (msec) : 20=0.89%, 50=84.91%, 100=0.07%, 250=12.57%, 500=1.56% 00:26:48.794 cpu : usr=97.49%, sys=1.78%, ctx=102, majf=0, minf=35 00:26:48.794 IO depths : 1=2.5%, 2=5.2%, 4=12.4%, 8=67.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=91.2%, 8=5.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename1: (groupid=0, jobs=1): err= 0: pid=3506936: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10004msec) 00:26:48.795 slat (nsec): min=8579, max=91285, avg=34218.11, stdev=12539.34 00:26:48.795 clat (msec): min=11, max=403, avg=58.91, stdev=80.34 00:26:48.795 lat (msec): min=11, max=404, avg=58.94, stdev=80.34 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 
70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 205], 95.00th=[ 296], 00:26:48.795 | 99.00th=[ 363], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 405], 00:26:48.795 | 99.99th=[ 405] 00:26:48.795 bw ( KiB/s): min= 128, max= 2032, per=3.97%, avg=1029.89, stdev=895.26, samples=19 00:26:48.795 iops : min= 32, max= 508, avg=257.47, stdev=223.81, samples=19 00:26:48.795 lat (msec) : 20=0.59%, 50=89.35%, 250=1.04%, 500=9.02% 00:26:48.795 cpu : usr=98.40%, sys=1.11%, ctx=30, majf=0, minf=30 00:26:48.795 IO depths : 1=1.4%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename1: (groupid=0, jobs=1): err= 0: pid=3506938: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=254, BW=1020KiB/s (1044kB/s)(9.97MiB/10012msec) 00:26:48.795 slat (usec): min=8, max=125, avg=44.72, stdev=26.23 00:26:48.795 clat (msec): min=13, max=413, avg=62.46, stdev=82.29 00:26:48.795 lat (msec): min=13, max=413, avg=62.51, stdev=82.29 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.795 | 70.00th=[ 37], 80.00th=[ 44], 90.00th=[ 218], 95.00th=[ 300], 00:26:48.795 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 414], 00:26:48.795 | 99.99th=[ 414] 00:26:48.795 bw ( KiB/s): min= 128, max= 1936, per=3.91%, avg=1014.40, stdev=834.85, samples=20 00:26:48.795 iops : min= 32, max= 484, avg=253.60, stdev=208.71, samples=20 00:26:48.795 lat (msec) : 20=1.76%, 50=84.56%, 100=3.02%, 250=1.18%, 500=9.48% 00:26:48.795 cpu : usr=98.29%, sys=1.31%, ctx=21, majf=0, minf=27 00:26:48.795 IO depths : 1=1.2%, 2=5.3%, 4=19.7%, 8=61.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename1: (groupid=0, jobs=1): err= 0: pid=3506939: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10006msec) 00:26:48.795 slat (usec): min=9, max=225, avg=38.74, stdev=22.58 00:26:48.795 clat (msec): min=13, max=326, avg=58.89, stdev=77.43 00:26:48.795 lat (msec): min=13, max=326, avg=58.92, stdev=77.44 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 215], 95.00th=[ 288], 00:26:48.795 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:26:48.795 | 99.99th=[ 326] 00:26:48.795 bw ( KiB/s): min= 128, max= 2048, per=4.17%, avg=1080.80, stdev=888.32, samples=20 00:26:48.795 iops : min= 32, max= 512, avg=270.20, stdev=222.08, samples=20 00:26:48.795 lat (msec) : 20=0.15%, 50=89.20%, 250=1.78%, 500=8.88% 00:26:48.795 cpu : usr=97.23%, sys=1.64%, ctx=84, majf=0, minf=19 00:26:48.795 IO depths : 1=3.1%, 2=9.4%, 4=24.9%, 8=53.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 
complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename1: (groupid=0, jobs=1): err= 0: pid=3506940: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=287, BW=1150KiB/s (1178kB/s)(11.2MiB/10007msec) 00:26:48.795 slat (usec): min=8, max=892, avg=34.76, stdev=26.29 00:26:48.795 clat (msec): min=5, max=334, avg=55.38, stdev=61.99 00:26:48.795 lat (msec): min=5, max=334, avg=55.41, stdev=61.99 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 186], 95.00th=[ 213], 00:26:48.795 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 326], 99.95th=[ 334], 00:26:48.795 | 99.99th=[ 334] 00:26:48.795 bw ( KiB/s): min= 256, max= 2048, per=4.26%, avg=1103.58, stdev=823.61, samples=19 00:26:48.795 iops : min= 64, max= 512, avg=275.89, stdev=205.90, samples=19 00:26:48.795 lat (msec) : 10=1.11%, 20=2.40%, 50=81.37%, 100=1.56%, 250=11.75% 00:26:48.795 lat (msec) : 500=1.81% 00:26:48.795 cpu : usr=91.54%, sys=3.96%, ctx=70, majf=0, minf=28 00:26:48.795 IO depths : 1=3.9%, 2=8.7%, 4=21.0%, 8=57.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename1: (groupid=0, jobs=1): err= 0: pid=3506941: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=271, BW=1087KiB/s (1113kB/s)(10.6MiB/10008msec) 00:26:48.795 slat (usec): min=8, max=1261, avg=50.59, stdev=35.50 00:26:48.795 clat (msec): min=11, max=386, avg=58.49, stdev=79.74 00:26:48.795 lat (msec): min=11, max=386, avg=58.54, stdev=79.75 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 54], 95.00th=[ 296], 00:26:48.795 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 384], 00:26:48.795 | 99.99th=[ 388] 00:26:48.795 bw ( KiB/s): min= 128, max= 2080, per=4.17%, avg=1081.60, stdev=901.90, samples=20 00:26:48.795 iops : min= 32, max= 520, avg=270.40, stdev=225.47, samples=20 00:26:48.795 lat (msec) : 20=1.03%, 50=88.90%, 100=0.07%, 250=0.74%, 500=9.26% 00:26:48.795 cpu : usr=94.55%, sys=2.64%, ctx=37, majf=0, minf=27 00:26:48.795 IO depths : 1=1.9%, 2=7.9%, 4=24.3%, 8=55.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename2: (groupid=0, jobs=1): err= 0: pid=3506942: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=267, BW=1068KiB/s (1094kB/s)(10.4MiB/10007msec) 00:26:48.795 slat (usec): min=8, max=323, avg=35.47, stdev=28.14 00:26:48.795 clat (msec): min=9, max=381, avg=59.64, stdev=78.06 00:26:48.795 lat (msec): min=9, max=381, avg=59.67, stdev=78.08 00:26:48.795 clat percentiles (msec): 
00:26:48.795 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 211], 95.00th=[ 288], 00:26:48.795 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 376], 99.95th=[ 380], 00:26:48.795 | 99.99th=[ 380] 00:26:48.795 bw ( KiB/s): min= 128, max= 2048, per=4.10%, avg=1062.40, stdev=869.12, samples=20 00:26:48.795 iops : min= 32, max= 512, avg=265.60, stdev=217.28, samples=20 00:26:48.795 lat (msec) : 10=0.15%, 20=1.76%, 50=86.68%, 100=0.64%, 250=2.10% 00:26:48.795 lat (msec) : 500=8.68% 00:26:48.795 cpu : usr=96.91%, sys=1.81%, ctx=71, majf=0, minf=25 00:26:48.795 IO depths : 1=3.6%, 2=8.5%, 4=22.2%, 8=56.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename2: (groupid=0, jobs=1): err= 0: pid=3506943: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=279, BW=1120KiB/s (1146kB/s)(10.9MiB/10004msec) 00:26:48.795 slat (usec): min=8, max=136, avg=27.28, stdev=14.46 00:26:48.795 clat (msec): min=13, max=283, avg=56.95, stdev=59.42 00:26:48.795 lat (msec): min=13, max=283, avg=56.98, stdev=59.41 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 190], 95.00th=[ 215], 00:26:48.795 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 243], 99.95th=[ 284], 00:26:48.795 | 99.99th=[ 284] 00:26:48.795 bw ( KiB/s): min= 256, max= 2048, per=4.14%, avg=1072.84, stdev=827.16, samples=19 00:26:48.795 iops : min= 64, max= 512, avg=268.21, stdev=206.79, samples=19 00:26:48.795 lat (msec) : 20=0.57%, 50=84.21%, 100=0.71%, 250=14.43%, 500=0.07% 00:26:48.795 cpu : usr=98.12%, sys=1.34%, ctx=26, majf=0, minf=24 00:26:48.795 IO depths : 1=2.7%, 2=7.7%, 4=21.5%, 8=57.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:48.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.795 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.795 filename2: (groupid=0, jobs=1): err= 0: pid=3506944: Wed May 15 04:27:35 2024 00:26:48.795 read: IOPS=278, BW=1116KiB/s (1142kB/s)(10.9MiB/10003msec) 00:26:48.795 slat (usec): min=8, max=164, avg=27.02, stdev=24.16 00:26:48.795 clat (msec): min=14, max=367, avg=57.17, stdev=62.60 00:26:48.795 lat (msec): min=14, max=367, avg=57.19, stdev=62.59 00:26:48.795 clat percentiles (msec): 00:26:48.795 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.795 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.795 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 190], 95.00th=[ 218], 00:26:48.795 | 99.00th=[ 288], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 368], 00:26:48.795 | 99.99th=[ 368] 00:26:48.795 bw ( KiB/s): min= 128, max= 2032, per=4.11%, avg=1066.95, stdev=834.95, samples=19 00:26:48.795 iops : min= 32, max= 508, avg=266.74, stdev=208.74, samples=19 00:26:48.795 lat (msec) : 20=0.82%, 50=84.62%, 100=0.57%, 250=12.54%, 500=1.43% 00:26:48.795 cpu : 
usr=98.38%, sys=1.21%, ctx=16, majf=0, minf=33 00:26:48.796 IO depths : 1=1.9%, 2=7.2%, 4=22.2%, 8=57.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 filename2: (groupid=0, jobs=1): err= 0: pid=3506945: Wed May 15 04:27:35 2024 00:26:48.796 read: IOPS=268, BW=1075KiB/s (1101kB/s)(10.5MiB/10003msec) 00:26:48.796 slat (usec): min=8, max=108, avg=29.86, stdev=14.42 00:26:48.796 clat (msec): min=4, max=386, avg=59.30, stdev=80.31 00:26:48.796 lat (msec): min=4, max=386, avg=59.33, stdev=80.31 00:26:48.796 clat percentiles (msec): 00:26:48.796 | 1.00th=[ 18], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.796 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.796 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 199], 95.00th=[ 296], 00:26:48.796 | 99.00th=[ 321], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 388], 00:26:48.796 | 99.99th=[ 388] 00:26:48.796 bw ( KiB/s): min= 128, max= 2032, per=3.95%, avg=1023.16, stdev=887.76, samples=19 00:26:48.796 iops : min= 32, max= 508, avg=255.79, stdev=221.94, samples=19 00:26:48.796 lat (msec) : 10=0.07%, 20=1.26%, 50=88.47%, 100=0.07%, 250=0.89% 00:26:48.796 lat (msec) : 500=9.23% 00:26:48.796 cpu : usr=98.14%, sys=1.43%, ctx=27, majf=0, minf=28 00:26:48.796 IO depths : 1=0.9%, 2=6.2%, 4=22.4%, 8=58.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 filename2: (groupid=0, jobs=1): err= 0: pid=3506946: Wed May 15 04:27:35 2024 00:26:48.796 read: IOPS=265, BW=1062KiB/s (1088kB/s)(10.4MiB/10022msec) 00:26:48.796 slat (nsec): min=8201, max=92741, avg=27925.22, stdev=14160.89 00:26:48.796 clat (msec): min=14, max=441, avg=60.02, stdev=73.34 00:26:48.796 lat (msec): min=14, max=441, avg=60.05, stdev=73.34 00:26:48.796 clat percentiles (msec): 00:26:48.796 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:26:48.796 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.796 | 70.00th=[ 34], 80.00th=[ 41], 90.00th=[ 201], 95.00th=[ 284], 00:26:48.796 | 99.00th=[ 309], 99.50th=[ 338], 99.90th=[ 384], 99.95th=[ 443], 00:26:48.796 | 99.99th=[ 443] 00:26:48.796 bw ( KiB/s): min= 128, max= 2032, per=4.09%, avg=1060.00, stdev=844.58, samples=20 00:26:48.796 iops : min= 32, max= 508, avg=265.00, stdev=211.15, samples=20 00:26:48.796 lat (msec) : 20=0.68%, 50=85.27%, 100=2.03%, 250=6.24%, 500=5.79% 00:26:48.796 cpu : usr=98.40%, sys=1.14%, ctx=57, majf=0, minf=30 00:26:48.796 IO depths : 1=0.5%, 2=4.7%, 4=19.7%, 8=62.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=93.5%, 8=1.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 filename2: (groupid=0, jobs=1): err= 0: pid=3506948: Wed May 15 04:27:35 2024 00:26:48.796 read: IOPS=266, BW=1066KiB/s 
(1091kB/s)(10.5MiB/10052msec) 00:26:48.796 slat (nsec): min=5850, max=96116, avg=30274.94, stdev=13115.44 00:26:48.796 clat (msec): min=22, max=500, avg=59.69, stdev=80.97 00:26:48.796 lat (msec): min=22, max=500, avg=59.72, stdev=80.97 00:26:48.796 clat percentiles (msec): 00:26:48.796 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.796 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.796 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 205], 95.00th=[ 296], 00:26:48.796 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 405], 99.95th=[ 502], 00:26:48.796 | 99.99th=[ 502] 00:26:48.796 bw ( KiB/s): min= 112, max= 2048, per=4.12%, avg=1067.20, stdev=888.19, samples=20 00:26:48.796 iops : min= 28, max= 512, avg=266.80, stdev=222.05, samples=20 00:26:48.796 lat (msec) : 50=89.47%, 100=0.37%, 250=1.12%, 500=8.96%, 750=0.07% 00:26:48.796 cpu : usr=98.31%, sys=1.26%, ctx=19, majf=0, minf=31 00:26:48.796 IO depths : 1=3.8%, 2=8.8%, 4=21.2%, 8=56.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=93.5%, 8=1.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 filename2: (groupid=0, jobs=1): err= 0: pid=3506949: Wed May 15 04:27:35 2024 00:26:48.796 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.2MiB/10008msec) 00:26:48.796 slat (usec): min=4, max=147, avg=29.34, stdev=18.21 00:26:48.796 clat (msec): min=9, max=377, avg=60.93, stdev=79.82 00:26:48.796 lat (msec): min=9, max=377, avg=60.96, stdev=79.82 00:26:48.796 clat percentiles (msec): 00:26:48.796 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 32], 00:26:48.796 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:48.796 | 70.00th=[ 34], 80.00th=[ 40], 90.00th=[ 243], 95.00th=[ 300], 00:26:48.796 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 380], 00:26:48.796 | 99.99th=[ 380] 00:26:48.796 bw ( KiB/s): min= 128, max= 1968, per=4.02%, avg=1041.60, stdev=847.94, samples=20 00:26:48.796 iops : min= 32, max= 492, avg=260.40, stdev=211.99, samples=20 00:26:48.796 lat (msec) : 10=0.04%, 20=1.37%, 50=85.61%, 100=2.60%, 250=1.30% 00:26:48.796 lat (msec) : 500=9.08% 00:26:48.796 cpu : usr=93.90%, sys=2.94%, ctx=119, majf=0, minf=24 00:26:48.796 IO depths : 1=1.0%, 2=4.0%, 4=14.8%, 8=66.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=92.4%, 8=3.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 filename2: (groupid=0, jobs=1): err= 0: pid=3506950: Wed May 15 04:27:35 2024 00:26:48.796 read: IOPS=270, BW=1082KiB/s (1108kB/s)(10.6MiB/10044msec) 00:26:48.796 slat (usec): min=8, max=191, avg=28.82, stdev=14.91 00:26:48.796 clat (msec): min=13, max=428, avg=58.86, stdev=80.43 00:26:48.796 lat (msec): min=13, max=428, avg=58.89, stdev=80.43 00:26:48.796 clat percentiles (msec): 00:26:48.796 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 32], 00:26:48.796 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:48.796 | 70.00th=[ 34], 80.00th=[ 38], 90.00th=[ 167], 95.00th=[ 296], 00:26:48.796 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 430], 00:26:48.796 | 99.99th=[ 430] 
00:26:48.796 bw ( KiB/s): min= 128, max= 2192, per=4.18%, avg=1082.40, stdev=911.14, samples=20 00:26:48.796 iops : min= 32, max= 548, avg=270.60, stdev=227.78, samples=20 00:26:48.796 lat (msec) : 20=2.72%, 50=85.36%, 100=1.91%, 250=1.10%, 500=8.90% 00:26:48.796 cpu : usr=94.06%, sys=3.06%, ctx=42, majf=0, minf=24 00:26:48.796 IO depths : 1=1.0%, 2=4.6%, 4=17.0%, 8=64.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:48.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 complete : 0=0.0%, 4=92.4%, 8=3.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.796 issued rwts: total=2718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:48.796 00:26:48.796 Run status group 0 (all jobs): 00:26:48.796 READ: bw=25.3MiB/s (26.5MB/s), 1020KiB/s-1150KiB/s (1044kB/s-1178kB/s), io=254MiB (267MB), run=10003-10052msec 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:48.796 04:27:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:48.796 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 bdev_null0 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 [2024-05-15 04:27:35.722574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 bdev_null1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:48.797 { 00:26:48.797 "params": { 00:26:48.797 "name": "Nvme$subsystem", 00:26:48.797 "trtype": "$TEST_TRANSPORT", 00:26:48.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.797 "adrfam": "ipv4", 00:26:48.797 "trsvcid": 
"$NVMF_PORT", 00:26:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.797 "hdgst": ${hdgst:-false}, 00:26:48.797 "ddgst": ${ddgst:-false} 00:26:48.797 }, 00:26:48.797 "method": "bdev_nvme_attach_controller" 00:26:48.797 } 00:26:48.797 EOF 00:26:48.797 )") 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:48.797 { 00:26:48.797 "params": { 00:26:48.797 "name": "Nvme$subsystem", 00:26:48.797 "trtype": "$TEST_TRANSPORT", 00:26:48.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.797 "adrfam": "ipv4", 00:26:48.797 "trsvcid": "$NVMF_PORT", 00:26:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.797 "hdgst": ${hdgst:-false}, 00:26:48.797 "ddgst": ${ddgst:-false} 00:26:48.797 }, 00:26:48.797 "method": "bdev_nvme_attach_controller" 00:26:48.797 } 00:26:48.797 EOF 00:26:48.797 )") 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:48.797 "params": { 00:26:48.797 "name": "Nvme0", 00:26:48.797 "trtype": "tcp", 00:26:48.797 "traddr": "10.0.0.2", 00:26:48.797 "adrfam": "ipv4", 00:26:48.797 "trsvcid": "4420", 00:26:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:48.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:48.797 "hdgst": false, 00:26:48.797 "ddgst": false 00:26:48.797 }, 00:26:48.797 "method": "bdev_nvme_attach_controller" 00:26:48.797 },{ 00:26:48.797 "params": { 00:26:48.797 "name": "Nvme1", 00:26:48.797 "trtype": "tcp", 00:26:48.797 "traddr": "10.0.0.2", 00:26:48.797 "adrfam": "ipv4", 00:26:48.797 "trsvcid": "4420", 00:26:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.797 "hdgst": false, 00:26:48.797 "ddgst": false 00:26:48.797 }, 00:26:48.797 "method": "bdev_nvme_attach_controller" 00:26:48.797 }' 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:48.797 04:27:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.797 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:48.797 ... 00:26:48.797 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:48.797 ... 
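The xtrace above is dense, so here is a condensed, hand-run sketch of the same sequence: create two DIF-type-1 null bdevs, expose each through its own NVMe/TCP subsystem on 10.0.0.2:4420, then drive them with the fio bdev plugin. Names, NQNs and addresses are the ones from this run; rpc_cmd in the trace is assumed to wrap scripts/rpc.py, and the fio job file passed on /dev/fd/61 is not reproduced here.

# Sketch only -- condensed from the traced test steps above; assumes a running
# nvmf_tgt that already accepts RPCs on its default socket.
for i in 0 1; do
    scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done
# fio then runs against those subsystems through the SPDK bdev ioengine plugin,
# with the generated attach-controller JSON on one fd and the job file on another:
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

With numjobs=2 over the two filenames, this matches the filename0/filename1 job descriptions printed above and the four threads fio starts just below.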
00:26:48.797 fio-3.35 00:26:48.797 Starting 4 threads 00:26:48.797 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.057 00:26:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3508267: Wed May 15 04:27:41 2024 00:26:54.057 read: IOPS=1856, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 00:26:54.057 slat (nsec): min=4341, max=39841, avg=11138.11, stdev=3972.17 00:26:54.057 clat (usec): min=2380, max=8791, avg=4272.96, stdev=528.07 00:26:54.057 lat (usec): min=2393, max=8830, avg=4284.10, stdev=528.08 00:26:54.057 clat percentiles (usec): 00:26:54.057 | 1.00th=[ 3261], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 3982], 00:26:54.057 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:54.057 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5211], 00:26:54.057 | 99.00th=[ 6587], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 8586], 00:26:54.057 | 99.99th=[ 8848] 00:26:54.057 bw ( KiB/s): min=13920, max=15696, per=25.48%, avg=14854.40, stdev=529.33, samples=10 00:26:54.057 iops : min= 1740, max= 1962, avg=1856.80, stdev=66.17, samples=10 00:26:54.057 lat (msec) : 4=23.09%, 10=76.91% 00:26:54.057 cpu : usr=92.32%, sys=7.18%, ctx=13, majf=0, minf=20 00:26:54.057 IO depths : 1=0.1%, 2=2.1%, 4=71.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 issued rwts: total=9289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3508268: Wed May 15 04:27:41 2024 00:26:54.057 read: IOPS=1778, BW=13.9MiB/s (14.6MB/s)(69.5MiB/5002msec) 00:26:54.057 slat (nsec): min=3963, max=39433, avg=11127.55, stdev=4184.09 00:26:54.057 clat (usec): min=1529, max=8353, avg=4463.08, stdev=738.56 00:26:54.057 lat (usec): min=1537, max=8366, avg=4474.20, stdev=737.78 00:26:54.057 clat percentiles (usec): 00:26:54.057 | 1.00th=[ 3523], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 4080], 00:26:54.057 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:54.057 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 6063], 95.00th=[ 6390], 00:26:54.057 | 99.00th=[ 6718], 99.50th=[ 6783], 99.90th=[ 7111], 99.95th=[ 7242], 00:26:54.057 | 99.99th=[ 8356] 00:26:54.057 bw ( KiB/s): min=13616, max=15360, per=24.40%, avg=14225.20, stdev=559.06, samples=10 00:26:54.057 iops : min= 1702, max= 1920, avg=1778.10, stdev=69.89, samples=10 00:26:54.057 lat (msec) : 2=0.03%, 4=13.71%, 10=86.25% 00:26:54.057 cpu : usr=93.00%, sys=6.50%, ctx=16, majf=0, minf=46 00:26:54.057 IO depths : 1=0.1%, 2=0.2%, 4=73.2%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 issued rwts: total=8897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:54.057 filename1: (groupid=0, jobs=1): err= 0: pid=3508269: Wed May 15 04:27:41 2024 00:26:54.057 read: IOPS=1860, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5003msec) 00:26:54.057 slat (nsec): min=4095, max=43106, avg=11038.64, stdev=4034.57 00:26:54.057 clat (usec): min=2319, max=7803, avg=4264.86, stdev=499.12 00:26:54.057 lat (usec): min=2327, max=7816, avg=4275.90, stdev=499.03 00:26:54.057 clat percentiles (usec): 00:26:54.057 | 1.00th=[ 3326], 5.00th=[ 3752], 
10.00th=[ 3916], 20.00th=[ 4015], 00:26:54.057 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:26:54.057 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 5014], 00:26:54.057 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7635], 00:26:54.057 | 99.99th=[ 7832] 00:26:54.057 bw ( KiB/s): min=14016, max=15616, per=25.53%, avg=14884.50, stdev=490.19, samples=10 00:26:54.057 iops : min= 1752, max= 1952, avg=1860.50, stdev=61.27, samples=10 00:26:54.057 lat (msec) : 4=18.97%, 10=81.03% 00:26:54.057 cpu : usr=91.76%, sys=7.72%, ctx=6, majf=0, minf=50 00:26:54.057 IO depths : 1=0.1%, 2=2.0%, 4=70.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 issued rwts: total=9309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:54.057 filename1: (groupid=0, jobs=1): err= 0: pid=3508271: Wed May 15 04:27:41 2024 00:26:54.057 read: IOPS=1791, BW=14.0MiB/s (14.7MB/s)(70.0MiB/5002msec) 00:26:54.057 slat (nsec): min=4447, max=44365, avg=11055.05, stdev=4278.86 00:26:54.057 clat (usec): min=1791, max=49679, avg=4431.89, stdev=1502.50 00:26:54.057 lat (usec): min=1799, max=49691, avg=4442.95, stdev=1502.09 00:26:54.057 clat percentiles (usec): 00:26:54.057 | 1.00th=[ 3490], 5.00th=[ 3851], 10.00th=[ 3949], 20.00th=[ 4047], 00:26:54.057 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:54.057 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 6194], 00:26:54.057 | 99.00th=[ 6652], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[49546], 00:26:54.057 | 99.99th=[49546] 00:26:54.057 bw ( KiB/s): min=12528, max=15312, per=24.58%, avg=14328.00, stdev=814.56, samples=10 00:26:54.057 iops : min= 1566, max= 1914, avg=1791.00, stdev=101.82, samples=10 00:26:54.057 lat (msec) : 2=0.04%, 4=15.54%, 10=84.33%, 50=0.09% 00:26:54.057 cpu : usr=92.60%, sys=6.92%, ctx=8, majf=0, minf=50 00:26:54.057 IO depths : 1=0.1%, 2=0.6%, 4=72.8%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.057 issued rwts: total=8960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:54.057 00:26:54.057 Run status group 0 (all jobs): 00:26:54.057 READ: bw=56.9MiB/s (59.7MB/s), 13.9MiB/s-14.5MiB/s (14.6MB/s-15.2MB/s), io=285MiB (299MB), run=5002-5003msec 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.315 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 00:26:54.316 real 0m24.568s 00:26:54.316 user 4m28.885s 00:26:54.316 sys 0m8.442s 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 ************************************ 00:26:54.316 END TEST fio_dif_rand_params 00:26:54.316 ************************************ 00:26:54.316 04:27:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:54.316 04:27:42 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:54.316 04:27:42 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 ************************************ 00:26:54.316 START TEST fio_dif_digest 00:26:54.316 ************************************ 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 bdev_null0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.316 [2024-05-15 04:27:42.211037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.316 { 00:26:54.316 "params": { 00:26:54.316 "name": "Nvme$subsystem", 00:26:54.316 "trtype": "$TEST_TRANSPORT", 00:26:54.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.316 "adrfam": "ipv4", 00:26:54.316 "trsvcid": "$NVMF_PORT", 00:26:54.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.316 "hdgst": ${hdgst:-false}, 00:26:54.316 "ddgst": ${ddgst:-false} 00:26:54.316 }, 00:26:54.316 "method": 
"bdev_nvme_attach_controller" 00:26:54.316 } 00:26:54.316 EOF 00:26:54.316 )") 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:54.316 04:27:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:54.316 "params": { 00:26:54.316 "name": "Nvme0", 00:26:54.316 "trtype": "tcp", 00:26:54.316 "traddr": "10.0.0.2", 00:26:54.317 "adrfam": "ipv4", 00:26:54.317 "trsvcid": "4420", 00:26:54.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:54.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:54.317 "hdgst": true, 00:26:54.317 "ddgst": true 00:26:54.317 }, 00:26:54.317 "method": "bdev_nvme_attach_controller" 00:26:54.317 }' 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:54.317 04:27:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.574 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:54.574 ... 
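For the digest run, the JSON printed at the start of this block is what fio receives on /dev/fd/62: a single bdev_nvme_attach_controller entry with header and data digests enabled. A stand-alone way to reproduce it is to write an equivalent config to a file first; note the outer "subsystems"/"bdev" wrapper below is added by the gen_nvmf_target_json helper and is an assumption rather than copied verbatim from the trace, and job.fio stands in for the 128k/iodepth=3 job file that is not shown.

# Sketch only -- run the digest job by hand with the same attach-controller
# parameters the test generated (NVMe/TCP with hdgst/ddgst enabled).
cat > /tmp/nvme0_tcp.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
# job.fio is a hypothetical stand-in for the job file passed on /dev/fd/61.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_tcp.json job.fio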
00:26:54.574 fio-3.35 00:26:54.574 Starting 3 threads 00:26:54.574 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.768 00:27:06.768 filename0: (groupid=0, jobs=1): err= 0: pid=3509107: Wed May 15 04:27:52 2024 00:27:06.768 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(262MiB/10038msec) 00:27:06.768 slat (nsec): min=4842, max=38985, avg=13722.71, stdev=3589.04 00:27:06.768 clat (usec): min=6526, max=93381, avg=14335.24, stdev=11064.30 00:27:06.768 lat (usec): min=6538, max=93394, avg=14348.97, stdev=11064.21 00:27:06.768 clat percentiles (usec): 00:27:06.768 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9634], 00:27:06.768 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11863], 60.00th=[12518], 00:27:06.768 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14484], 95.00th=[51643], 00:27:06.768 | 99.00th=[54789], 99.50th=[55313], 99.90th=[92799], 99.95th=[92799], 00:27:06.768 | 99.99th=[93848] 00:27:06.768 bw ( KiB/s): min=20224, max=32256, per=41.23%, avg=26816.00, stdev=3306.55, samples=20 00:27:06.768 iops : min= 158, max= 252, avg=209.50, stdev=25.83, samples=20 00:27:06.768 lat (msec) : 10=25.83%, 20=67.21%, 50=0.76%, 100=6.20% 00:27:06.768 cpu : usr=90.73%, sys=8.74%, ctx=16, majf=0, minf=137 00:27:06.768 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.768 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:06.768 filename0: (groupid=0, jobs=1): err= 0: pid=3509108: Wed May 15 04:27:52 2024 00:27:06.768 read: IOPS=145, BW=18.2MiB/s (19.1MB/s)(183MiB/10046msec) 00:27:06.768 slat (nsec): min=3715, max=51442, avg=15394.97, stdev=4969.93 00:27:06.768 clat (usec): min=7132, max=99045, avg=20579.08, stdev=14475.92 00:27:06.768 lat (usec): min=7145, max=99058, avg=20594.48, stdev=14476.03 00:27:06.768 clat percentiles (usec): 00:27:06.768 | 1.00th=[ 7767], 5.00th=[10159], 10.00th=[10814], 20.00th=[13435], 00:27:06.768 | 30.00th=[14746], 40.00th=[15664], 50.00th=[16450], 60.00th=[17171], 00:27:06.768 | 70.00th=[17695], 80.00th=[18744], 90.00th=[52691], 95.00th=[56361], 00:27:06.768 | 99.00th=[59507], 99.50th=[93848], 99.90th=[99091], 99.95th=[99091], 00:27:06.768 | 99.99th=[99091] 00:27:06.768 bw ( KiB/s): min=14336, max=24320, per=28.71%, avg=18675.20, stdev=2484.06, samples=20 00:27:06.768 iops : min= 112, max= 190, avg=145.90, stdev=19.41, samples=20 00:27:06.768 lat (msec) : 10=4.72%, 20=81.18%, 50=1.64%, 100=12.46% 00:27:06.768 cpu : usr=92.27%, sys=7.24%, ctx=19, majf=0, minf=208 00:27:06.768 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 issued rwts: total=1461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.768 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:06.768 filename0: (groupid=0, jobs=1): err= 0: pid=3509109: Wed May 15 04:27:52 2024 00:27:06.768 read: IOPS=153, BW=19.2MiB/s (20.2MB/s)(193MiB/10046msec) 00:27:06.768 slat (nsec): min=4581, max=58638, avg=14157.00, stdev=3851.85 00:27:06.768 clat (usec): min=6425, max=99049, avg=19447.49, stdev=14407.48 00:27:06.768 lat (usec): min=6452, max=99062, avg=19461.65, stdev=14407.51 00:27:06.768 clat percentiles 
(usec): 00:27:06.768 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 9110], 20.00th=[11338], 00:27:06.768 | 30.00th=[13566], 40.00th=[15401], 50.00th=[16319], 60.00th=[17171], 00:27:06.768 | 70.00th=[17957], 80.00th=[19006], 90.00th=[50594], 95.00th=[56361], 00:27:06.768 | 99.00th=[60031], 99.50th=[93848], 99.90th=[96994], 99.95th=[99091], 00:27:06.768 | 99.99th=[99091] 00:27:06.768 bw ( KiB/s): min=12288, max=24576, per=30.37%, avg=19752.30, stdev=3164.87, samples=20 00:27:06.768 iops : min= 96, max= 192, avg=154.30, stdev=24.73, samples=20 00:27:06.768 lat (msec) : 10=11.71%, 20=74.77%, 50=3.36%, 100=10.16% 00:27:06.768 cpu : usr=91.51%, sys=7.98%, ctx=27, majf=0, minf=249 00:27:06.768 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.768 issued rwts: total=1546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.768 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:06.768 00:27:06.768 Run status group 0 (all jobs): 00:27:06.768 READ: bw=63.5MiB/s (66.6MB/s), 18.2MiB/s-26.1MiB/s (19.1MB/s-27.4MB/s), io=638MiB (669MB), run=10038-10046msec 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.768 00:27:06.768 real 0m11.125s 00:27:06.768 user 0m28.677s 00:27:06.768 sys 0m2.685s 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.768 04:27:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:06.768 ************************************ 00:27:06.768 END TEST fio_dif_digest 00:27:06.768 ************************************ 00:27:06.768 04:27:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:06.768 04:27:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:06.768 rmmod nvme_tcp 
00:27:06.768 rmmod nvme_fabrics 00:27:06.768 rmmod nvme_keyring 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3502304 ']' 00:27:06.768 04:27:53 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3502304 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3502304 ']' 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3502304 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3502304 00:27:06.768 04:27:53 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:06.769 04:27:53 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:06.769 04:27:53 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3502304' 00:27:06.769 killing process with pid 3502304 00:27:06.769 04:27:53 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3502304 00:27:06.769 [2024-05-15 04:27:53.421772] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:06.769 04:27:53 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3502304 00:27:06.769 04:27:53 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:06.769 04:27:53 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.027 Waiting for block devices as requested 00:27:07.027 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:07.027 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:07.285 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:07.285 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:07.285 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:07.285 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:07.543 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:07.543 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:07.543 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:07.543 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:07.800 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:07.800 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:07.800 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:08.058 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:08.058 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:08.058 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:08.058 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:08.317 04:27:56 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.317 04:27:56 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.317 04:27:56 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.317 04:27:56 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.317 04:27:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.317 04:27:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:08.317 04:27:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.227 04:27:58 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:27:10.227 00:27:10.227 real 1m8.306s 00:27:10.227 user 6m26.334s 00:27:10.227 sys 0m21.013s 00:27:10.227 04:27:58 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:10.227 04:27:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:10.227 ************************************ 00:27:10.227 END TEST nvmf_dif 00:27:10.227 ************************************ 00:27:10.227 04:27:58 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:10.227 04:27:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:10.227 04:27:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:10.227 04:27:58 -- common/autotest_common.sh@10 -- # set +x 00:27:10.227 ************************************ 00:27:10.227 START TEST nvmf_abort_qd_sizes 00:27:10.227 ************************************ 00:27:10.227 04:27:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:10.527 * Looking for test storage... 00:27:10.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.527 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.528 04:27:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.057 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:13.058 00:27:13.058 --- 10.0.0.2 ping statistics --- 00:27:13.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.058 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:27:13.058 00:27:13.058 --- 10.0.0.1 ping statistics --- 00:27:13.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.058 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:13.058 04:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:14.431 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:14.431 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:14.431 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:15.366 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3514507 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3514507 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3514507 ']' 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.366 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:15.367 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
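Before nvmf_tgt comes up, the trace above has moved one port of the E810 pair into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on the same host. Condensed into a hand-run sketch, with the interface names, addresses and nvmf_tgt arguments taken from this run; the helper additionally waits for the RPC socket, which is what the "Waiting for process to start up..." message below refers to.

# Sketch only -- target-side namespace setup and target launch as traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # reachability check before starting the target
# Start the NVMe-oF target inside the namespace (binary path shortened from the trace):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &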
00:27:15.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.367 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:15.367 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.367 [2024-05-15 04:28:03.346876] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:27:15.367 [2024-05-15 04:28:03.347016] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.623 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.623 [2024-05-15 04:28:03.422862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.623 [2024-05-15 04:28:03.532245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.623 [2024-05-15 04:28:03.532294] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.623 [2024-05-15 04:28:03.532322] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.623 [2024-05-15 04:28:03.532333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.623 [2024-05-15 04:28:03.532343] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.623 [2024-05-15 04:28:03.532431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.623 [2024-05-15 04:28:03.532497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.623 [2024-05-15 04:28:03.532564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.623 [2024-05-15 04:28:03.532566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:15.879 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:15.880 04:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.880 ************************************ 00:27:15.880 START TEST spdk_target_abort 00:27:15.880 ************************************ 00:27:15.880 04:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:27:15.880 04:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:15.880 04:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:15.880 04:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.880 04:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.154 spdk_targetn1 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.154 [2024-05-15 04:28:06.546187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.154 [2024-05-15 04:28:06.578174] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:19.154 [2024-05-15 04:28:06.578450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.154 04:28:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.154 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.428 Initializing NVMe Controllers 00:27:22.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:22.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:22.428 Initialization complete. Launching workers. 00:27:22.428 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8472, failed: 0 00:27:22.428 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1797, failed to submit 6675 00:27:22.428 success 735, unsuccess 1062, failed 0 00:27:22.428 04:28:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:22.428 04:28:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:22.428 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.702 Initializing NVMe Controllers 00:27:25.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:25.702 Initialization complete. Launching workers. 00:27:25.702 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8551, failed: 0 00:27:25.702 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7292 00:27:25.702 success 309, unsuccess 950, failed 0 00:27:25.702 04:28:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:25.702 04:28:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.702 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.227 Initializing NVMe Controllers 00:27:28.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:28.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:28.227 Initialization complete. Launching workers. 
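Each of these abort runs is one pass of the rabort loop in abort_qd_sizes.sh: the SPDK target is configured once over the PCIe NVMe device and the abort example is then replayed at queue depths 4, 24 and 64. A condensed sketch of that flow, with rpc.py standing in for the script's rpc_cmd wrapper (not the literal commands from the trace):

  # one-time spdk_target_abort setup, as traced above
  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # queue-depth sweep; each run reports I/O completed vs aborts submitted
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done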
00:27:28.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30951, failed: 0 00:27:28.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 28241 00:27:28.227 success 511, unsuccess 2199, failed 0 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.227 04:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3514507 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3514507 ']' 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3514507 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3514507 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3514507' 00:27:29.596 killing process with pid 3514507 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3514507 00:27:29.596 [2024-05-15 04:28:17.531620] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:29.596 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3514507 00:27:29.853 00:27:29.853 real 0m14.093s 00:27:29.853 user 0m51.954s 00:27:29.853 sys 0m3.032s 00:27:29.853 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.853 04:28:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.853 ************************************ 00:27:29.853 END TEST spdk_target_abort 00:27:29.853 ************************************ 00:27:29.853 04:28:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:29.853 04:28:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:29.853 04:28:17 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.853 04:28:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:29.853 ************************************ 00:27:29.853 START TEST kernel_target_abort 00:27:29.853 ************************************ 00:27:29.853 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:27:29.853 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:29.854 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:30.112 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:30.112 04:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:31.519 Waiting for block devices as requested 00:27:31.519 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:31.519 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:31.519 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:31.519 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:31.519 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:31.778 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:31.778 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:31.778 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:31.778 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:32.036 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:32.036 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:32.036 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:32.036 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:32.296 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:32.296 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:32.296 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:32.555 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:32.555 No valid GPT data, bailing 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:32.555 04:28:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:32.555 00:27:32.555 Discovery Log Number of Records 2, Generation counter 2 00:27:32.555 =====Discovery Log Entry 0====== 00:27:32.555 trtype: tcp 00:27:32.555 adrfam: ipv4 00:27:32.555 subtype: current discovery subsystem 00:27:32.555 treq: not specified, sq flow control disable supported 00:27:32.555 portid: 1 00:27:32.555 trsvcid: 4420 00:27:32.555 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:32.555 traddr: 10.0.0.1 00:27:32.555 eflags: none 00:27:32.555 sectype: none 00:27:32.555 =====Discovery Log Entry 1====== 00:27:32.555 trtype: tcp 00:27:32.555 adrfam: ipv4 00:27:32.555 subtype: nvme subsystem 00:27:32.555 treq: not specified, sq flow control disable supported 00:27:32.555 portid: 1 00:27:32.555 trsvcid: 4420 00:27:32.555 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:32.555 traddr: 10.0.0.1 00:27:32.555 eflags: none 00:27:32.555 sectype: none 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:32.555 04:28:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:32.555 04:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.555 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.832 Initializing NVMe Controllers 00:27:35.832 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:35.832 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:35.832 Initialization complete. Launching workers. 00:27:35.832 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25397, failed: 0 00:27:35.832 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25397, failed to submit 0 00:27:35.832 success 0, unsuccess 25397, failed 0 00:27:35.832 04:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:35.832 04:28:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:35.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.108 Initializing NVMe Controllers 00:27:39.108 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:39.108 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:39.108 Initialization complete. Launching workers. 
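These kernel_target_abort runs point at the Linux kernel nvmet target that configure_kernel_target set up just above through configfs, backed by the local /dev/nvme0n1. The xtrace does not show where each echo is redirected, so the attribute file names in this sketch are the standard nvmet configfs ones and are an assumption:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $subsys                 # configfs auto-creates namespaces/ under the subsystem
  mkdir $subsys/namespaces/1
  mkdir $nvmet/ports/1
  echo 1 > $subsys/attr_allow_any_host              # assumed redirect targets from here on
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1 > $subsys/namespaces/1/enable
  echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
  echo tcp  > $nvmet/ports/1/addr_trtype
  echo 4420 > $nvmet/ports/1/addr_trsvcid
  echo ipv4 > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/
  # nvme discover -a 10.0.0.1 -t tcp -s 4420 then lists both the discovery
  # subsystem and nqn.2016-06.io.spdk:testnqn, as shown in the trace above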
00:27:39.108 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53193, failed: 0 00:27:39.108 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13382, failed to submit 39811 00:27:39.109 success 0, unsuccess 13382, failed 0 00:27:39.109 04:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:39.109 04:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.109 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.385 Initializing NVMe Controllers 00:27:42.385 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:42.385 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:42.385 Initialization complete. Launching workers. 00:27:42.385 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55520, failed: 0 00:27:42.385 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13822, failed to submit 41698 00:27:42.385 success 0, unsuccess 13822, failed 0 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:42.385 04:28:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:43.318 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:43.318 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:43.318 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:43.318 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:44.251 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:44.251 00:27:44.251 real 0m14.318s 00:27:44.251 user 0m4.353s 00:27:44.251 sys 0m3.600s 00:27:44.251 04:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:44.251 04:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.251 ************************************ 00:27:44.251 END TEST kernel_target_abort 00:27:44.251 ************************************ 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.251 rmmod nvme_tcp 00:27:44.251 rmmod nvme_fabrics 00:27:44.251 rmmod nvme_keyring 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3514507 ']' 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3514507 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3514507 ']' 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3514507 00:27:44.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3514507) - No such process 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3514507 is not found' 00:27:44.251 Process with pid 3514507 is not found 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:44.251 04:28:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:45.634 Waiting for block devices as requested 00:27:45.634 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:45.895 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:45.895 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:45.895 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:45.895 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:46.153 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:46.153 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:46.153 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:46.153 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:46.413 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:46.413 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:46.413 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:46.413 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:46.672 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:46.673 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:46.673 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:27:46.673 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:46.931 04:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.833 04:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.833 00:27:48.833 real 0m38.554s 00:27:48.833 user 0m58.668s 00:27:48.833 sys 0m10.544s 00:27:48.833 04:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:48.833 04:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:48.833 ************************************ 00:27:48.833 END TEST nvmf_abort_qd_sizes 00:27:48.833 ************************************ 00:27:48.833 04:28:36 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:48.833 04:28:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:48.833 04:28:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:48.833 04:28:36 -- common/autotest_common.sh@10 -- # set +x 00:27:48.833 ************************************ 00:27:48.833 START TEST keyring_file 00:27:48.833 ************************************ 00:27:48.833 04:28:36 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:49.125 * Looking for test storage... 
00:27:49.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.125 04:28:36 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.125 04:28:36 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.125 04:28:36 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.125 04:28:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.125 04:28:36 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.125 04:28:36 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.125 04:28:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:49.125 04:28:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M0vrX0p7nh 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:49.125 04:28:36 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M0vrX0p7nh 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M0vrX0p7nh 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.M0vrX0p7nh 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bwO58fHi3W 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:49.125 04:28:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bwO58fHi3W 00:27:49.125 04:28:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bwO58fHi3W 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bwO58fHi3W 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=3520562 00:27:49.125 04:28:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:49.126 04:28:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3520562 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3520562 ']' 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.126 04:28:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:49.126 [2024-05-15 04:28:37.003251] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
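The two interchange-format PSK files prepared above (key0 and key1, chmod 0600) are what the rest of this suite exercises: they are registered with the bdevperf keyring by name and then referenced when attaching the TLS-enabled controller. Condensed from the RPC calls that appear later in this trace (the $rpc shorthand is only for readability here):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh
  $rpc keyring_file_add_key key1 /tmp/tmp.bwO58fHi3W
  $rpc keyring_get_keys            # the per-key refcnt is what file.sh asserts on
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0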
00:27:49.126 [2024-05-15 04:28:37.003346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520562 ] 00:27:49.126 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.126 [2024-05-15 04:28:37.090570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.383 [2024-05-15 04:28:37.237245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:49.641 04:28:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 [2024-05-15 04:28:37.502650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.641 null0 00:27:49.641 [2024-05-15 04:28:37.534664] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:49.641 [2024-05-15 04:28:37.534745] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:49.641 [2024-05-15 04:28:37.535239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:49.641 [2024-05-15 04:28:37.542710] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.641 04:28:37 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 [2024-05-15 04:28:37.550723] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:49.641 request: 00:27:49.641 { 00:27:49.641 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.641 "secure_channel": false, 00:27:49.641 "listen_address": { 00:27:49.641 "trtype": "tcp", 00:27:49.641 "traddr": "127.0.0.1", 00:27:49.641 "trsvcid": "4420" 00:27:49.641 }, 00:27:49.641 "method": "nvmf_subsystem_add_listener", 00:27:49.641 "req_id": 1 00:27:49.641 } 00:27:49.641 Got JSON-RPC error response 00:27:49.641 response: 00:27:49.641 { 00:27:49.641 "code": -32602, 00:27:49.641 
"message": "Invalid parameters" 00:27:49.641 } 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.641 04:28:37 keyring_file -- keyring/file.sh@46 -- # bperfpid=3520576 00:27:49.641 04:28:37 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:49.641 04:28:37 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3520576 /var/tmp/bperf.sock 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3520576 ']' 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:49.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.641 04:28:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 [2024-05-15 04:28:37.599775] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 00:27:49.641 [2024-05-15 04:28:37.599849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520576 ] 00:27:49.641 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.899 [2024-05-15 04:28:37.676421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.899 [2024-05-15 04:28:37.793482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.830 04:28:38 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.830 04:28:38 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:50.830 04:28:38 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:50.830 04:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:50.830 04:28:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bwO58fHi3W 00:27:50.830 04:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bwO58fHi3W 00:27:51.087 04:28:39 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:51.087 04:28:39 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:51.087 04:28:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.087 04:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.087 04:28:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:27:51.344 04:28:39 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.M0vrX0p7nh == \/\t\m\p\/\t\m\p\.\M\0\v\r\X\0\p\7\n\h ]] 00:27:51.344 04:28:39 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:51.344 04:28:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:51.344 04:28:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.344 04:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.344 04:28:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:51.602 04:28:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bwO58fHi3W == \/\t\m\p\/\t\m\p\.\b\w\O\5\8\f\H\i\3\W ]] 00:27:51.602 04:28:39 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:51.602 04:28:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:51.602 04:28:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:51.602 04:28:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.602 04:28:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:51.603 04:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.860 04:28:39 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:51.860 04:28:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:51.860 04:28:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:51.860 04:28:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:51.860 04:28:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.860 04:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.860 04:28:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:52.119 04:28:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:52.119 04:28:39 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:52.119 04:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:52.376 [2024-05-15 04:28:40.256047] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:52.376 nvme0n1 00:27:52.376 04:28:40 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:52.376 04:28:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:52.376 04:28:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:52.376 04:28:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.376 04:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.376 04:28:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.632 04:28:40 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:52.632 04:28:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:52.632 04:28:40 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:52.632 04:28:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:52.632 04:28:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.632 04:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.632 04:28:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:52.888 04:28:40 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:52.888 04:28:40 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.144 Running I/O for 1 seconds... 00:27:54.077 00:27:54.077 Latency(us) 00:27:54.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.077 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:54.077 nvme0n1 : 1.02 3925.18 15.33 0.00 0.00 32362.04 8786.68 50098.63 00:27:54.077 =================================================================================================================== 00:27:54.077 Total : 3925.18 15.33 0.00 0.00 32362.04 8786.68 50098.63 00:27:54.077 0 00:27:54.077 04:28:41 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:54.077 04:28:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:54.334 04:28:42 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:54.334 04:28:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:54.334 04:28:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.334 04:28:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.334 04:28:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:54.334 04:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.591 04:28:42 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:54.591 04:28:42 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:54.591 04:28:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:54.591 04:28:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.591 04:28:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.591 04:28:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:54.591 04:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.848 04:28:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:54.848 04:28:42 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:54.848 04:28:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:54.848 04:28:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:54.848 04:28:42 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:27:54.848 04:28:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.849 04:28:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:54.849 04:28:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.849 04:28:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:54.849 04:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:55.106 [2024-05-15 04:28:42.978432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:55.106 [2024-05-15 04:28:42.978527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5bf30 (107): Transport endpoint is not connected 00:27:55.106 [2024-05-15 04:28:42.979518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5bf30 (9): Bad file descriptor 00:27:55.106 [2024-05-15 04:28:42.980518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:55.106 [2024-05-15 04:28:42.980537] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:55.106 [2024-05-15 04:28:42.980566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
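Each of the '(( 1 == 1 ))' / '(( 2 == 2 ))' lines above is the tail of a reference-count assertion: the key list is fetched over the same socket, the entry is picked out by name with jq, and its .refcnt field is compared with the expected value. A sketch of what the keyring/common.sh helpers visible in this trace amount to (rpc() as in the earlier snippet):

    get_key()    { rpc keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    # while nvme0 is attached with --psk key0 the key is referenced twice
    (( $(get_refcnt key0) == 2 ))
    # after bdev_nvme_detach_controller nvme0 it drops back to 1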
00:27:55.106 request: 00:27:55.106 { 00:27:55.106 "name": "nvme0", 00:27:55.106 "trtype": "tcp", 00:27:55.106 "traddr": "127.0.0.1", 00:27:55.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.106 "adrfam": "ipv4", 00:27:55.106 "trsvcid": "4420", 00:27:55.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.106 "psk": "key1", 00:27:55.106 "method": "bdev_nvme_attach_controller", 00:27:55.106 "req_id": 1 00:27:55.106 } 00:27:55.106 Got JSON-RPC error response 00:27:55.106 response: 00:27:55.106 { 00:27:55.106 "code": -32602, 00:27:55.106 "message": "Invalid parameters" 00:27:55.106 } 00:27:55.106 04:28:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:55.106 04:28:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.106 04:28:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.106 04:28:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.106 04:28:42 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:55.106 04:28:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:55.106 04:28:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:55.106 04:28:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:55.106 04:28:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.106 04:28:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:55.364 04:28:43 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:55.364 04:28:43 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:55.364 04:28:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:55.364 04:28:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:55.364 04:28:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:55.364 04:28:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.364 04:28:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:55.620 04:28:43 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:55.620 04:28:43 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:55.620 04:28:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:55.876 04:28:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:55.876 04:28:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:56.134 04:28:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:56.134 04:28:43 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:56.134 04:28:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.391 04:28:44 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:56.391 04:28:44 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.M0vrX0p7nh 00:27:56.391 04:28:44 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:56.391 04:28:44 
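The -32602 'Invalid parameters' response above is the expected outcome of that attach: key1 is not the PSK this listener was set up with in this run, so the TLS handshake never completes (the errno 107 messages) and the RPC is rejected. The 'NOT bperf_cmd ...' wrapper only asserts that the wrapped call exits non-zero; outside the framework the same negative check is simply:

    # attaching with the wrong PSK must fail
    if rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "wrong PSK was accepted unexpectedly" >&2
        exit 1
    fi

The same pattern is reused just below for a key file with bad permissions.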
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.391 04:28:44 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.391 04:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.648 [2024-05-15 04:28:44.469410] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.M0vrX0p7nh': 0100660 00:27:56.648 [2024-05-15 04:28:44.469449] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:56.648 request: 00:27:56.648 { 00:27:56.648 "name": "key0", 00:27:56.648 "path": "/tmp/tmp.M0vrX0p7nh", 00:27:56.648 "method": "keyring_file_add_key", 00:27:56.648 "req_id": 1 00:27:56.648 } 00:27:56.648 Got JSON-RPC error response 00:27:56.648 response: 00:27:56.648 { 00:27:56.648 "code": -1, 00:27:56.648 "message": "Operation not permitted" 00:27:56.648 } 00:27:56.648 04:28:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:56.648 04:28:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:56.648 04:28:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:56.648 04:28:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:56.648 04:28:44 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.M0vrX0p7nh 00:27:56.648 04:28:44 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.649 04:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M0vrX0p7nh 00:27:56.906 04:28:44 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.M0vrX0p7nh 00:27:56.907 04:28:44 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:56.907 04:28:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:56.907 04:28:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.907 04:28:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.907 04:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.907 04:28:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:57.165 04:28:44 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:57.165 04:28:44 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:57.165 04:28:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:57.165 04:28:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:57.165 04:28:44 
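The 'Invalid permissions for key file ...: 0100660' error above is the point of this step: keyring_file refuses a key file that group or others can access, and accepts the same file once it is mode 0600. Creating a key file the keyring will take therefore looks like the following sketch (the key contents are whatever PSK material is needed, written here as an elided interchange-format string):

    key_path=$(mktemp)                          # e.g. /tmp/tmp.XXXXXXXXXX
    echo -n "NVMeTLSkey-1:..." > "$key_path"    # PSK in interchange format (see the sketch further down)
    chmod 0600 "$key_path"                      # owner-only; 0660 makes keyring_file_add_key return -1
    rpc keyring_file_add_key key0 "$key_path"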
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:57.166 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.166 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:57.166 04:28:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.166 04:28:44 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:57.166 04:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:57.425 [2024-05-15 04:28:45.215442] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.M0vrX0p7nh': No such file or directory 00:27:57.425 [2024-05-15 04:28:45.215478] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:57.425 [2024-05-15 04:28:45.215511] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:57.425 [2024-05-15 04:28:45.215524] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:57.425 [2024-05-15 04:28:45.215538] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:57.425 request: 00:27:57.425 { 00:27:57.425 "name": "nvme0", 00:27:57.425 "trtype": "tcp", 00:27:57.425 "traddr": "127.0.0.1", 00:27:57.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.425 "adrfam": "ipv4", 00:27:57.425 "trsvcid": "4420", 00:27:57.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.425 "psk": "key0", 00:27:57.425 "method": "bdev_nvme_attach_controller", 00:27:57.425 "req_id": 1 00:27:57.425 } 00:27:57.425 Got JSON-RPC error response 00:27:57.425 response: 00:27:57.425 { 00:27:57.425 "code": -19, 00:27:57.425 "message": "No such device" 00:27:57.425 } 00:27:57.425 04:28:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:57.425 04:28:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:57.425 04:28:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:57.425 04:28:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:57.425 04:28:45 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:57.425 04:28:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:57.683 04:28:45 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sDOcSPIkis 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:57.683 04:28:45 
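prep_key, which starts here, builds exactly such a file: it takes raw key material (00112233445566778899aabbccddeeff) and a hash indicator (0), renders them into the NVMe/TCP PSK interchange format with an inline Python snippet, writes the result to a mktemp path and restricts it to 0600. The Python body itself is not echoed in this log; one plausible sketch of the formatting step, assuming the material is hex and the usual append-CRC32-then-base64 interchange layout, is:

    format_interchange_psk() {
        # sketch: $1 = hex key material, $2 = hash indicator (0 = none, 1 = SHA-256, 2 = SHA-384)
        # assumptions: material is hex; layout is NVMeTLSkey-1:<hh>:base64(key + CRC32(key), CRC little-endian):
        python3 -c 'import base64, sys, zlib; key = bytes.fromhex(sys.argv[1]); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$1" "$2"
    }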
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:57.683 04:28:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:57.683 04:28:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:57.683 04:28:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:57.683 04:28:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:57.683 04:28:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sDOcSPIkis 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sDOcSPIkis 00:27:57.683 04:28:45 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.sDOcSPIkis 00:27:57.683 04:28:45 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sDOcSPIkis 00:27:57.683 04:28:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sDOcSPIkis 00:27:57.940 04:28:45 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:57.940 04:28:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.198 nvme0n1 00:27:58.198 04:28:46 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:58.198 04:28:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:58.198 04:28:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:58.198 04:28:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.198 04:28:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.198 04:28:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:58.454 04:28:46 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:58.454 04:28:46 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:58.454 04:28:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:58.710 04:28:46 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:58.710 04:28:46 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:58.710 04:28:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.710 04:28:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.710 04:28:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:58.967 04:28:46 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:58.967 04:28:46 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:58.967 04:28:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:58.967 04:28:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:58.967 04:28:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.967 04:28:46 keyring_file -- 
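Removing key0 while nvme0 still holds it does not delete it outright: the keyring keeps the entry, reports it with "removed": true, and it only disappears once the controller is detached (the 'jq length' == 0 check that follows the detach below). Condensed, the assertions made here are:

    rpc keyring_file_remove_key key0                  # key0 is still referenced by nvme0
    [[ $(get_key key0 | jq -r .removed) == true ]]    # still listed, but flagged as removed
    (( $(get_refcnt key0) == 1 ))                     # one reference remains while nvme0 is attached
    rpc bdev_nvme_detach_controller nvme0
    (( $(rpc keyring_get_keys | jq length) == 0 ))    # now the key is really gone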
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.967 04:28:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:59.224 04:28:47 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:59.224 04:28:47 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:59.225 04:28:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:59.482 04:28:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:59.482 04:28:47 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:59.482 04:28:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.740 04:28:47 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:59.740 04:28:47 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sDOcSPIkis 00:27:59.740 04:28:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sDOcSPIkis 00:27:59.997 04:28:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bwO58fHi3W 00:27:59.997 04:28:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bwO58fHi3W 00:28:00.255 04:28:48 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:00.255 04:28:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:00.512 nvme0n1 00:28:00.512 04:28:48 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:00.512 04:28:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:00.770 04:28:48 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:00.770 "subsystems": [ 00:28:00.770 { 00:28:00.770 "subsystem": "keyring", 00:28:00.770 "config": [ 00:28:00.770 { 00:28:00.770 "method": "keyring_file_add_key", 00:28:00.770 "params": { 00:28:00.770 "name": "key0", 00:28:00.770 "path": "/tmp/tmp.sDOcSPIkis" 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "keyring_file_add_key", 00:28:00.770 "params": { 00:28:00.770 "name": "key1", 00:28:00.770 "path": "/tmp/tmp.bwO58fHi3W" 00:28:00.770 } 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "iobuf", 00:28:00.770 "config": [ 00:28:00.770 { 00:28:00.770 "method": "iobuf_set_options", 00:28:00.770 "params": { 00:28:00.770 "small_pool_count": 8192, 00:28:00.770 "large_pool_count": 1024, 00:28:00.770 "small_bufsize": 8192, 00:28:00.770 "large_bufsize": 135168 00:28:00.770 } 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "sock", 00:28:00.770 "config": [ 00:28:00.770 { 00:28:00.770 "method": "sock_impl_set_options", 00:28:00.770 "params": { 00:28:00.770 
"impl_name": "posix", 00:28:00.770 "recv_buf_size": 2097152, 00:28:00.770 "send_buf_size": 2097152, 00:28:00.770 "enable_recv_pipe": true, 00:28:00.770 "enable_quickack": false, 00:28:00.770 "enable_placement_id": 0, 00:28:00.770 "enable_zerocopy_send_server": true, 00:28:00.770 "enable_zerocopy_send_client": false, 00:28:00.770 "zerocopy_threshold": 0, 00:28:00.770 "tls_version": 0, 00:28:00.770 "enable_ktls": false 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "sock_impl_set_options", 00:28:00.770 "params": { 00:28:00.770 "impl_name": "ssl", 00:28:00.770 "recv_buf_size": 4096, 00:28:00.770 "send_buf_size": 4096, 00:28:00.770 "enable_recv_pipe": true, 00:28:00.770 "enable_quickack": false, 00:28:00.770 "enable_placement_id": 0, 00:28:00.770 "enable_zerocopy_send_server": true, 00:28:00.770 "enable_zerocopy_send_client": false, 00:28:00.770 "zerocopy_threshold": 0, 00:28:00.770 "tls_version": 0, 00:28:00.770 "enable_ktls": false 00:28:00.770 } 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "vmd", 00:28:00.770 "config": [] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "accel", 00:28:00.770 "config": [ 00:28:00.770 { 00:28:00.770 "method": "accel_set_options", 00:28:00.770 "params": { 00:28:00.770 "small_cache_size": 128, 00:28:00.770 "large_cache_size": 16, 00:28:00.770 "task_count": 2048, 00:28:00.770 "sequence_count": 2048, 00:28:00.770 "buf_count": 2048 00:28:00.770 } 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "bdev", 00:28:00.770 "config": [ 00:28:00.770 { 00:28:00.770 "method": "bdev_set_options", 00:28:00.770 "params": { 00:28:00.770 "bdev_io_pool_size": 65535, 00:28:00.770 "bdev_io_cache_size": 256, 00:28:00.770 "bdev_auto_examine": true, 00:28:00.770 "iobuf_small_cache_size": 128, 00:28:00.770 "iobuf_large_cache_size": 16 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_raid_set_options", 00:28:00.770 "params": { 00:28:00.770 "process_window_size_kb": 1024 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_iscsi_set_options", 00:28:00.770 "params": { 00:28:00.770 "timeout_sec": 30 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_nvme_set_options", 00:28:00.770 "params": { 00:28:00.770 "action_on_timeout": "none", 00:28:00.770 "timeout_us": 0, 00:28:00.770 "timeout_admin_us": 0, 00:28:00.770 "keep_alive_timeout_ms": 10000, 00:28:00.770 "arbitration_burst": 0, 00:28:00.770 "low_priority_weight": 0, 00:28:00.770 "medium_priority_weight": 0, 00:28:00.770 "high_priority_weight": 0, 00:28:00.770 "nvme_adminq_poll_period_us": 10000, 00:28:00.770 "nvme_ioq_poll_period_us": 0, 00:28:00.770 "io_queue_requests": 512, 00:28:00.770 "delay_cmd_submit": true, 00:28:00.770 "transport_retry_count": 4, 00:28:00.770 "bdev_retry_count": 3, 00:28:00.770 "transport_ack_timeout": 0, 00:28:00.770 "ctrlr_loss_timeout_sec": 0, 00:28:00.770 "reconnect_delay_sec": 0, 00:28:00.770 "fast_io_fail_timeout_sec": 0, 00:28:00.770 "disable_auto_failback": false, 00:28:00.770 "generate_uuids": false, 00:28:00.770 "transport_tos": 0, 00:28:00.770 "nvme_error_stat": false, 00:28:00.770 "rdma_srq_size": 0, 00:28:00.770 "io_path_stat": false, 00:28:00.770 "allow_accel_sequence": false, 00:28:00.770 "rdma_max_cq_size": 0, 00:28:00.770 "rdma_cm_event_timeout_ms": 0, 00:28:00.770 "dhchap_digests": [ 00:28:00.770 "sha256", 00:28:00.770 "sha384", 00:28:00.770 "sha512" 00:28:00.770 ], 00:28:00.770 "dhchap_dhgroups": [ 00:28:00.770 "null", 
00:28:00.770 "ffdhe2048", 00:28:00.770 "ffdhe3072", 00:28:00.770 "ffdhe4096", 00:28:00.770 "ffdhe6144", 00:28:00.770 "ffdhe8192" 00:28:00.770 ] 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_nvme_attach_controller", 00:28:00.770 "params": { 00:28:00.770 "name": "nvme0", 00:28:00.770 "trtype": "TCP", 00:28:00.770 "adrfam": "IPv4", 00:28:00.770 "traddr": "127.0.0.1", 00:28:00.770 "trsvcid": "4420", 00:28:00.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.770 "prchk_reftag": false, 00:28:00.770 "prchk_guard": false, 00:28:00.770 "ctrlr_loss_timeout_sec": 0, 00:28:00.770 "reconnect_delay_sec": 0, 00:28:00.770 "fast_io_fail_timeout_sec": 0, 00:28:00.770 "psk": "key0", 00:28:00.770 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.770 "hdgst": false, 00:28:00.770 "ddgst": false 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_nvme_set_hotplug", 00:28:00.770 "params": { 00:28:00.770 "period_us": 100000, 00:28:00.770 "enable": false 00:28:00.770 } 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "method": "bdev_wait_for_examine" 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }, 00:28:00.770 { 00:28:00.770 "subsystem": "nbd", 00:28:00.770 "config": [] 00:28:00.770 } 00:28:00.770 ] 00:28:00.770 }' 00:28:00.770 04:28:48 keyring_file -- keyring/file.sh@114 -- # killprocess 3520576 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3520576 ']' 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3520576 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@951 -- # uname 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3520576 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3520576' 00:28:00.770 killing process with pid 3520576 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@965 -- # kill 3520576 00:28:00.770 Received shutdown signal, test time was about 1.000000 seconds 00:28:00.770 00:28:00.770 Latency(us) 00:28:00.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.770 =================================================================================================================== 00:28:00.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.770 04:28:48 keyring_file -- common/autotest_common.sh@970 -- # wait 3520576 00:28:01.028 04:28:48 keyring_file -- keyring/file.sh@117 -- # bperfpid=3522046 00:28:01.028 04:28:48 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3522046 /var/tmp/bperf.sock 00:28:01.028 04:28:48 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3522046 ']' 00:28:01.028 04:28:48 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.028 04:28:48 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:01.028 04:28:48 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.029 04:28:48 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:28:01.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.029 04:28:48 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:01.029 "subsystems": [ 00:28:01.029 { 00:28:01.029 "subsystem": "keyring", 00:28:01.029 "config": [ 00:28:01.029 { 00:28:01.029 "method": "keyring_file_add_key", 00:28:01.029 "params": { 00:28:01.029 "name": "key0", 00:28:01.029 "path": "/tmp/tmp.sDOcSPIkis" 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "keyring_file_add_key", 00:28:01.029 "params": { 00:28:01.029 "name": "key1", 00:28:01.029 "path": "/tmp/tmp.bwO58fHi3W" 00:28:01.029 } 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "iobuf", 00:28:01.029 "config": [ 00:28:01.029 { 00:28:01.029 "method": "iobuf_set_options", 00:28:01.029 "params": { 00:28:01.029 "small_pool_count": 8192, 00:28:01.029 "large_pool_count": 1024, 00:28:01.029 "small_bufsize": 8192, 00:28:01.029 "large_bufsize": 135168 00:28:01.029 } 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "sock", 00:28:01.029 "config": [ 00:28:01.029 { 00:28:01.029 "method": "sock_impl_set_options", 00:28:01.029 "params": { 00:28:01.029 "impl_name": "posix", 00:28:01.029 "recv_buf_size": 2097152, 00:28:01.029 "send_buf_size": 2097152, 00:28:01.029 "enable_recv_pipe": true, 00:28:01.029 "enable_quickack": false, 00:28:01.029 "enable_placement_id": 0, 00:28:01.029 "enable_zerocopy_send_server": true, 00:28:01.029 "enable_zerocopy_send_client": false, 00:28:01.029 "zerocopy_threshold": 0, 00:28:01.029 "tls_version": 0, 00:28:01.029 "enable_ktls": false 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "sock_impl_set_options", 00:28:01.029 "params": { 00:28:01.029 "impl_name": "ssl", 00:28:01.029 "recv_buf_size": 4096, 00:28:01.029 "send_buf_size": 4096, 00:28:01.029 "enable_recv_pipe": true, 00:28:01.029 "enable_quickack": false, 00:28:01.029 "enable_placement_id": 0, 00:28:01.029 "enable_zerocopy_send_server": true, 00:28:01.029 "enable_zerocopy_send_client": false, 00:28:01.029 "zerocopy_threshold": 0, 00:28:01.029 "tls_version": 0, 00:28:01.029 "enable_ktls": false 00:28:01.029 } 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "vmd", 00:28:01.029 "config": [] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "accel", 00:28:01.029 "config": [ 00:28:01.029 { 00:28:01.029 "method": "accel_set_options", 00:28:01.029 "params": { 00:28:01.029 "small_cache_size": 128, 00:28:01.029 "large_cache_size": 16, 00:28:01.029 "task_count": 2048, 00:28:01.029 "sequence_count": 2048, 00:28:01.029 "buf_count": 2048 00:28:01.029 } 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "bdev", 00:28:01.029 "config": [ 00:28:01.029 { 00:28:01.029 "method": "bdev_set_options", 00:28:01.029 "params": { 00:28:01.029 "bdev_io_pool_size": 65535, 00:28:01.029 "bdev_io_cache_size": 256, 00:28:01.029 "bdev_auto_examine": true, 00:28:01.029 "iobuf_small_cache_size": 128, 00:28:01.029 "iobuf_large_cache_size": 16 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_raid_set_options", 00:28:01.029 "params": { 00:28:01.029 "process_window_size_kb": 1024 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_iscsi_set_options", 00:28:01.029 "params": { 00:28:01.029 "timeout_sec": 30 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_nvme_set_options", 
00:28:01.029 "params": { 00:28:01.029 "action_on_timeout": "none", 00:28:01.029 "timeout_us": 0, 00:28:01.029 "timeout_admin_us": 0, 00:28:01.029 "keep_alive_timeout_ms": 10000, 00:28:01.029 "arbitration_burst": 0, 00:28:01.029 "low_priority_weight": 0, 00:28:01.029 "medium_priority_weight": 0, 00:28:01.029 "high_priority_weight": 0, 00:28:01.029 "nvme_adminq_poll_period_us": 10000, 00:28:01.029 "nvme_ioq_poll_period_us": 0, 00:28:01.029 "io_queue_requests": 512, 00:28:01.029 "delay_cmd_submit": true, 00:28:01.029 "transport_retry_count": 4, 00:28:01.029 "bdev_retry_count": 3, 00:28:01.029 "transport_ack_timeout": 0, 00:28:01.029 "ctrlr_loss_timeout_sec": 0, 00:28:01.029 "reconnect_delay_sec": 0, 00:28:01.029 "fast_io_fail_timeout_sec": 0, 00:28:01.029 "disable_auto_failback": false, 00:28:01.029 "generate_uuids": false, 00:28:01.029 "transport_tos": 0, 00:28:01.029 "nvme_error_stat": false, 00:28:01.029 "rdma_srq_size": 0, 00:28:01.029 "io_path_stat": false, 00:28:01.029 "allow_accel_sequence": false, 00:28:01.029 "rdma_max_cq_size": 0, 00:28:01.029 "rdma_cm_event_timeout_ms": 0, 00:28:01.029 "dhchap_digests": [ 00:28:01.029 "sha256", 00:28:01.029 "sha384", 00:28:01.029 "sha512" 00:28:01.029 ], 00:28:01.029 "dhchap_dhgroups": [ 00:28:01.029 "null", 00:28:01.029 "ffdhe2048", 00:28:01.029 "ffdhe3072", 00:28:01.029 "ffdhe4096", 00:28:01.029 "ffdhe6144", 00:28:01.029 "ffdhe8192" 00:28:01.029 ] 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_nvme_attach_controller", 00:28:01.029 "params": { 00:28:01.029 "name": "nvme0", 00:28:01.029 "trtype": "TCP", 00:28:01.029 "adrfam": "IPv4", 00:28:01.029 "traddr": "127.0.0.1", 00:28:01.029 "trsvcid": "4420", 00:28:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.029 "prchk_reftag": false, 00:28:01.029 "prchk_guard": false, 00:28:01.029 "ctrlr_loss_timeout_sec": 0, 00:28:01.029 "reconnect_delay_sec": 0, 00:28:01.029 "fast_io_fail_timeout_sec": 0, 00:28:01.029 "psk": "key0", 00:28:01.029 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.029 "hdgst": false, 00:28:01.029 "ddgst": false 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_nvme_set_hotplug", 00:28:01.029 "params": { 00:28:01.029 "period_us": 100000, 00:28:01.029 "enable": false 00:28:01.029 } 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "method": "bdev_wait_for_examine" 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }, 00:28:01.029 { 00:28:01.029 "subsystem": "nbd", 00:28:01.029 "config": [] 00:28:01.029 } 00:28:01.029 ] 00:28:01.029 }' 00:28:01.029 04:28:48 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.029 04:28:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:01.030 [2024-05-15 04:28:49.011784] Starting SPDK v24.05-pre git sha1 2dc74a001 / DPDK 23.11.0 initialization... 
00:28:01.030 [2024-05-15 04:28:49.011864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522046 ] 00:28:01.030 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.287 [2024-05-15 04:28:49.080439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.287 [2024-05-15 04:28:49.196137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.545 [2024-05-15 04:28:49.379038] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:02.174 04:28:49 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.174 04:28:49 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:02.174 04:28:49 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:02.174 04:28:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.174 04:28:49 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:02.431 04:28:50 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:02.431 04:28:50 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:02.431 04:28:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:02.431 04:28:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:02.431 04:28:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:02.431 04:28:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.431 04:28:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:02.690 04:28:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:02.690 04:28:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:02.690 04:28:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:02.690 04:28:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:02.690 04:28:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:02.690 04:28:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.690 04:28:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:02.946 04:28:50 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:02.946 04:28:50 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:02.946 04:28:50 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:02.946 04:28:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:03.203 04:28:50 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:03.203 04:28:50 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:03.203 04:28:50 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sDOcSPIkis /tmp/tmp.bwO58fHi3W 00:28:03.203 04:28:50 keyring_file -- keyring/file.sh@20 -- # killprocess 3522046 00:28:03.203 04:28:50 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3522046 ']' 00:28:03.203 04:28:50 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3522046 00:28:03.203 04:28:50 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:28:03.203 04:28:50 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.203 04:28:50 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522046 00:28:03.203 04:28:51 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:03.203 04:28:51 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:03.203 04:28:51 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522046' 00:28:03.203 killing process with pid 3522046 00:28:03.203 04:28:51 keyring_file -- common/autotest_common.sh@965 -- # kill 3522046 00:28:03.203 Received shutdown signal, test time was about 1.000000 seconds 00:28:03.203 00:28:03.203 Latency(us) 00:28:03.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.203 =================================================================================================================== 00:28:03.203 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:03.203 04:28:51 keyring_file -- common/autotest_common.sh@970 -- # wait 3522046 00:28:03.460 04:28:51 keyring_file -- keyring/file.sh@21 -- # killprocess 3520562 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3520562 ']' 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3520562 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@951 -- # uname 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3520562 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3520562' 00:28:03.460 killing process with pid 3520562 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@965 -- # kill 3520562 00:28:03.460 [2024-05-15 04:28:51.302960] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:03.460 [2024-05-15 04:28:51.303027] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:03.460 04:28:51 keyring_file -- common/autotest_common.sh@970 -- # wait 3520562 00:28:04.027 00:28:04.027 real 0m14.954s 00:28:04.027 user 0m36.384s 00:28:04.027 sys 0m3.364s 00:28:04.027 04:28:51 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:04.027 04:28:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:04.027 ************************************ 00:28:04.027 END TEST keyring_file 00:28:04.027 ************************************ 00:28:04.027 04:28:51 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:28:04.027 04:28:51 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:04.027 
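The checks made against the restarted instance above only need to show that the replayed configuration is live: two keys registered, key0 referenced by the controller the config re-created, and that controller named nvme0. In the same RPC/jq shorthand used earlier:

    (( $(rpc keyring_get_keys | jq length) == 2 ))
    (( $(get_refcnt key0) == 2 ))                      # held by the keyring and by nvme0
    [[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]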
04:28:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:04.027 04:28:51 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:28:04.027 04:28:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:04.027 04:28:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:04.027 04:28:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:04.027 04:28:51 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:28:04.027 04:28:51 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:28:04.027 04:28:51 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:04.027 04:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:04.027 04:28:51 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:28:04.027 04:28:51 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:04.027 04:28:51 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:04.027 04:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:05.927 INFO: APP EXITING 00:28:05.927 INFO: killing all VMs 00:28:05.927 INFO: killing vhost app 00:28:05.927 INFO: EXIT DONE 00:28:06.863 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:06.863 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:06.863 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:06.863 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:06.863 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:06.863 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:06.863 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:06.863 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:06.863 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:07.121 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:07.121 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:07.121 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:07.121 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:07.121 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:07.121 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:07.121 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:07.121 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:08.497 Cleaning 00:28:08.497 Removing: /var/run/dpdk/spdk0/config 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:08.497 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:08.497 Removing: /var/run/dpdk/spdk1/config 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:08.497 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:08.497 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:08.497 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:08.497 Removing: /var/run/dpdk/spdk2/config 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:08.497 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:08.497 Removing: /var/run/dpdk/spdk3/config 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:08.497 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:08.497 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:08.497 Removing: /var/run/dpdk/spdk4/config 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:08.498 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:08.498 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:08.498 Removing: /dev/shm/bdev_svc_trace.1 00:28:08.498 Removing: /dev/shm/nvmf_trace.0 00:28:08.498 Removing: /dev/shm/spdk_tgt_trace.pid3242835 00:28:08.498 Removing: /var/run/dpdk/spdk0 00:28:08.498 Removing: /var/run/dpdk/spdk1 00:28:08.498 Removing: /var/run/dpdk/spdk2 00:28:08.498 Removing: /var/run/dpdk/spdk3 00:28:08.498 Removing: /var/run/dpdk/spdk4 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3241195 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3241926 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3242835 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3243181 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3243870 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3244134 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3244854 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3244870 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3245112 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3246374 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3247343 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3247656 
00:28:08.498 Removing: /var/run/dpdk/spdk_pid3247850 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3248174 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3248370 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3248528 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3248682 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3248938 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3249450 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3251803 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3252085 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3252256 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3252392 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3252822 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3252826 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253223 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253277 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253566 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253571 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253735 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3253871 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3254234 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3254392 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3254709 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3254886 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255024 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255102 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255324 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255537 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255694 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3255962 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3256131 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3256285 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3256559 00:28:08.498 Removing: /var/run/dpdk/spdk_pid3256720 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3256877 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3257164 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3257414 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3257581 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3257855 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3258016 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3258464 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3258946 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3259116 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3259394 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3259545 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3259717 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3259894 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3260110 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3262573 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3291448 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3294398 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3302382 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3305963 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3308878 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3309278 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3317370 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3317372 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3318029 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3318587 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3319232 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3319631 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3319637 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3319896 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3320027 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3320034 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3320581 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3321228 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3321892 
00:28:08.757 Removing: /var/run/dpdk/spdk_pid3322291 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3322299 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3322552 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3323535 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3324298 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3330150 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3330480 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3333921 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3338035 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3340088 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3347454 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3353493 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3354690 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3355468 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3366796 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3369305 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3393870 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3397694 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3398871 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3400188 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3400338 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3400486 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3400626 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3401063 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3402374 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3403114 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3403542 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3405170 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3405727 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3406413 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3409224 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3415959 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3418622 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3422808 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3423877 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3424996 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3428220 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3431237 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3436278 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3436283 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3439476 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3439625 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3439860 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3440129 00:28:08.757 Removing: /var/run/dpdk/spdk_pid3440140 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3443049 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3443375 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3446325 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3448315 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3452147 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3455897 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3462532 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3467496 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3467498 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3481024 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3481434 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3481968 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3482381 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3483203 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3483635 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3484174 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3484696 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3487519 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3487765 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3491964 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3492014 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3493613 
00:28:08.758 Removing: /var/run/dpdk/spdk_pid3499066 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3499072 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3502482 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3503992 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3505900 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3506733 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3508171 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3508934 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3514850 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3515200 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3515592 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3517242 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3517540 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3517920 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3520562 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3520576 00:28:08.758 Removing: /var/run/dpdk/spdk_pid3522046 00:28:08.758 Clean 00:28:09.017 04:28:56 -- common/autotest_common.sh@1447 -- # return 0 00:28:09.017 04:28:56 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:28:09.017 04:28:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.017 04:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 04:28:56 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:28:09.017 04:28:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.017 04:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 04:28:56 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:09.017 04:28:56 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:09.017 04:28:56 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:09.017 04:28:56 -- spdk/autotest.sh@387 -- # hash lcov 00:28:09.017 04:28:56 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:09.017 04:28:56 -- spdk/autotest.sh@389 -- # hostname 00:28:09.017 04:28:56 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:09.274 geninfo: WARNING: invalid characters removed from testname! 
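The capture above and the merge/filter entries that follow implement a standard lcov workflow: capture a post-test tracefile, fold it into the pre-test baseline, then strip third-party and uninteresting paths before reporting. A condensed sketch of that flow is below; OUT and SPDK_DIR are placeholder variables standing in for the Jenkins output and source directories, not names used by the autotest scripts themselves, and the genhtml-related --rc options from the log are omitted for brevity.

  # capture branch/function coverage for the test run, excluding files outside the source tree
  lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q \
       -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # fold the pre-test baseline and the post-test capture into one combined tracefile
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # drop DPDK, system headers, and sample apps from the combined data, rewriting in place
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done
  # intermediate tracefiles are no longer needed once cov_total.info exists
  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

The geninfo warning about invalid characters in the test name is non-fatal: geninfo strips the offending characters from the -t value and the capture proceeds, as the subsequent merge steps in this run show.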
00:28:41.334 04:29:23 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:41.334 04:29:27 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:43.235 04:29:30 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:45.798 04:29:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:49.082 04:29:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:51.618 04:29:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:54.908 04:29:42 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:54.908 04:29:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.908 04:29:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:54.908 04:29:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.908 04:29:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.909 04:29:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.909 04:29:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.909 04:29:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.909 04:29:42 -- paths/export.sh@5 -- $ export PATH 00:28:54.909 04:29:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.909 04:29:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:54.909 04:29:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:28:54.909 04:29:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715740182.XXXXXX 00:28:54.909 04:29:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715740182.27DONa 00:28:54.909 04:29:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:28:54.909 04:29:42 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:28:54.909 04:29:42 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:54.909 04:29:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:54.909 04:29:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:54.909 04:29:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:28:54.909 04:29:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:28:54.909 04:29:42 -- common/autotest_common.sh@10 -- $ set +x 00:28:54.909 04:29:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:54.909 04:29:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:28:54.909 04:29:42 -- pm/common@17 -- $ local monitor 00:28:54.909 04:29:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.909 04:29:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.909 04:29:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.909 04:29:42 -- pm/common@21 -- $ date +%s 00:28:54.909 04:29:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.909 04:29:42 -- pm/common@21 -- $ date +%s 00:28:54.909 
04:29:42 -- pm/common@25 -- $ sleep 1 00:28:54.909 04:29:42 -- pm/common@21 -- $ date +%s 00:28:54.909 04:29:42 -- pm/common@21 -- $ date +%s 00:28:54.909 04:29:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715740182 00:28:54.909 04:29:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715740182 00:28:54.909 04:29:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715740182 00:28:54.909 04:29:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715740182 00:28:54.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715740182_collect-vmstat.pm.log 00:28:54.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715740182_collect-cpu-load.pm.log 00:28:54.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715740182_collect-cpu-temp.pm.log 00:28:54.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715740182_collect-bmc-pm.bmc.pm.log 00:28:55.478 04:29:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:28:55.478 04:29:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:55.478 04:29:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:55.478 04:29:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:55.478 04:29:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:55.478 04:29:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:55.478 04:29:43 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:55.478 04:29:43 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:55.478 04:29:43 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:55.737 04:29:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:55.737 04:29:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:55.737 04:29:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:55.737 04:29:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:55.737 04:29:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.738 04:29:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:55.738 04:29:43 -- pm/common@44 -- $ pid=3531871 00:28:55.738 04:29:43 -- pm/common@50 -- $ kill -TERM 3531871 00:28:55.738 04:29:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.738 04:29:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:55.738 04:29:43 -- pm/common@44 -- $ pid=3531873 00:28:55.738 04:29:43 -- pm/common@50 -- $ kill 
-TERM 3531873 00:28:55.738 04:29:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.738 04:29:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:55.738 04:29:43 -- pm/common@44 -- $ pid=3531875 00:28:55.738 04:29:43 -- pm/common@50 -- $ kill -TERM 3531875 00:28:55.738 04:29:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.738 04:29:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:55.738 04:29:43 -- pm/common@44 -- $ pid=3531909 00:28:55.738 04:29:43 -- pm/common@50 -- $ sudo -E kill -TERM 3531909 00:28:55.738 + [[ -n 3155379 ]] 00:28:55.738 + sudo kill 3155379 00:28:55.746 [Pipeline] } 00:28:55.763 [Pipeline] // stage 00:28:55.768 [Pipeline] } 00:28:55.784 [Pipeline] // timeout 00:28:55.788 [Pipeline] } 00:28:55.805 [Pipeline] // catchError 00:28:55.810 [Pipeline] } 00:28:55.826 [Pipeline] // wrap 00:28:55.832 [Pipeline] } 00:28:55.847 [Pipeline] // catchError 00:28:55.853 [Pipeline] stage 00:28:55.854 [Pipeline] { (Epilogue) 00:28:55.865 [Pipeline] catchError 00:28:55.866 [Pipeline] { 00:28:55.879 [Pipeline] echo 00:28:55.881 Cleanup processes 00:28:55.886 [Pipeline] sh 00:28:56.164 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:56.164 3532006 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:56.164 3532136 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:56.178 [Pipeline] sh 00:28:56.458 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:56.458 ++ grep -v 'sudo pgrep' 00:28:56.458 ++ awk '{print $1}' 00:28:56.458 + sudo kill -9 3532006 00:28:56.470 [Pipeline] sh 00:28:56.748 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:04.920 [Pipeline] sh 00:29:05.203 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:05.203 Artifacts sizes are good 00:29:05.217 [Pipeline] archiveArtifacts 00:29:05.223 Archiving artifacts 00:29:05.423 [Pipeline] sh 00:29:05.700 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:05.710 [Pipeline] cleanWs 00:29:05.716 [WS-CLEANUP] Deleting project workspace... 00:29:05.716 [WS-CLEANUP] Deferred wipeout is used... 00:29:05.721 [WS-CLEANUP] done 00:29:05.722 [Pipeline] } 00:29:05.733 [Pipeline] // catchError 00:29:05.741 [Pipeline] sh 00:29:06.013 + logger -p user.info -t JENKINS-CI 00:29:06.021 [Pipeline] } 00:29:06.037 [Pipeline] // stage 00:29:06.042 [Pipeline] } 00:29:06.056 [Pipeline] // node 00:29:06.060 [Pipeline] End of Pipeline 00:29:06.089 Finished: SUCCESS
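The teardown traced in the epilogue above is the stop_monitor_resources handler installed earlier with `trap stop_monitor_resources EXIT`: for each resource collector it checks for a pidfile under the workspace's output/power directory and sends SIGTERM, using sudo for the BMC collector since that one was started privileged. A minimal sketch of the same pattern, with the monitor list written out explicitly rather than taken from the scripts' MONITOR_RESOURCES array (the path below is the one used in this run; adjust for another workspace):

  POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile="$POWER_DIR/$mon.pid"
      [[ -e $pidfile ]] || continue      # collector never started, nothing to signal
      pid=$(<"$pidfile")
      if [[ $mon == collect-bmc-pm ]]; then
          sudo -E kill -TERM "$pid"      # BMC collector runs under sudo, so stop it the same way
      else
          kill -TERM "$pid"
      fi
  done

After the monitors exit, the pgrep sweep in the Epilogue stage lists any stray processes still touching the workspace (here the ipmitool sdr dump left behind by the BMC collector) and kills them before the artifacts are compressed, size-checked, and archived.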